Update README.md
README.md
@@ -48,7 +48,7 @@ GGML versions are not yet provided, as there is not yet support for SuperHOT in
 ## Repositories available

 * [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Superhot-8K-GPTQ)
-* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Superhot-8K-
+* [Unquantised SuperHOT fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Superhot-8K-fp16)
 * [Eric's base unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/ehartford/Wizard-Vicuna-30B-Uncensored)

 ## How to easily download and use this model in text-generation-webui with ExLlama