---
inference: false
license: other
---

# Tim Dettmers' Guanaco 65B GPTQ

These files are GPTQ 4bit model files for [Tim Dettmers' Guanaco 65B](https://huggingface.co/timdettmers/guanaco-65b).

It is the result of quantising to 4bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Other repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/guanaco-65B-GPTQ)
* [4-bit, 5-bit and 8-bit GGML models for CPU(+GPU) inference](https://huggingface.co/TheBloke/guanaco-65B-GGML)
* [Original unquantised fp16 model in HF format](https://huggingface.co/timdettmers/guanaco-65b)

## How to easily download and use this model in text-generation-webui

Open text-generation-webui as normal.

1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/guanaco-65B-GPTQ`.
3. Click **Download**.
4. Wait until it says it's finished downloading.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model drop-down**: choose the model you just downloaded, `guanaco-65B-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the `GPTQ parameters` on the right: `Bits = 4`, `Groupsize = None`, `model_type = Llama`
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation tab** and enter a prompt!
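
If you prefer to script the download rather than use the webui, here is a minimal sketch using `huggingface_hub`. The target directory is an illustrative assumption; point it at text-generation-webui's `models/` folder:

```python
# Minimal download sketch using huggingface_hub (assumes a recent
# version that supports local_dir). The directory name below is an
# illustrative assumption, not part of the original instructions.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="TheBloke/guanaco-65B-GPTQ",
    local_dir="models/TheBloke_guanaco-65B-GPTQ",
)
```

After the download completes, click the **Refresh** icon next to **Model** and the model should appear in the drop-down.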

## Provided files

**Compatible file - Guanaco-65B-GPTQ-4bit.act-order.safetensors**

In the `main` branch you will find `Guanaco-65B-GPTQ-4bit.act-order.safetensors`.

This will work with all versions of GPTQ-for-LLaMa, giving maximum compatibility.

It was created without groupsize to minimise VRAM requirements, and with the `--act-order` parameter to maximise inference accuracy.

* `Guanaco-65B-GPTQ-4bit.act-order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with AutoGPTQ (see the loading sketch below)
  * Works with text-generation-webui one-click-installers
  * Parameters: Groupsize = None; act-order = True
  * Command used to create the GPTQ:
    ```
    python llama.py /workspace/process/TheBloke_guanaco-65B-GGML/HF wikitext2 --wbits 4 --true-sequential --act-order --save_safetensors /workspace/process/TheBloke_guanaco-65B-GGML/gptq/Guanaco-65B-GPTQ-4bit.act-order.safetensors
    ```
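
If you want to load the file from Python rather than the webui, a minimal AutoGPTQ sketch follows. The `model_basename` must match the safetensors filename above (without the extension); the `### Human:`/`### Assistant:` template is the format Guanaco was trained with, and the generation settings are illustrative assumptions, not values from this repo:

```python
# Minimal AutoGPTQ loading sketch (assumes a recent auto-gptq; the
# 4-bit 65B weights still require a large amount of GPU VRAM).
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "TheBloke/guanaco-65B-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(
    repo,
    model_basename="Guanaco-65B-GPTQ-4bit.act-order",  # matches the file above
    use_safetensors=True,
    device="cuda:0",
)

# Guanaco follows the OASST-style Human/Assistant prompt template.
prompt = "### Human: Explain GPTQ quantisation in one paragraph.\n### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```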
   
# Original model card

Not provided by original model creator.