TheBloke committed
Commit 0d5efee
1 Parent(s): c7b8f3c

Update README.md

Files changed (1): README.md (+1 −0)

README.md CHANGED
@@ -12,6 +12,7 @@ I have the following Koala model repositories available:
 * [Unquantized 13B model in HF format](https://huggingface.co/TheBloke/koala-13B-HF)
 * [GPTQ quantized 4bit 13B model in `pt` and `safetensors` formats](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g)
 * [GPTQ quantized 4bit 13B model in GGML format for `llama.cpp`](https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g-GGML)
+
 **7B models:**
 * [Unquantized 7B model in HF format](https://huggingface.co/TheBloke/koala-7B-HF)
 * [Unquantized 7B model in GGML format for llama.cpp](https://huggingface.co/TheBloke/koala-7b-ggml-unquantized)