shaowenchen committed
Commit a4e3a4f
Parent: 68bd6b9

improvement: readme

Files changed (1): README.md (+15 -15)
README.md CHANGED
@@ -48,21 +48,21 @@ docker run --rm -it -p 8000:8000 -v /path/to/models:/models -e MODEL=/models/ggu
 
 | Name | Quant method | Compressed Size |
 | -------------------------------------------------- | ------------ | --------------- |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q2_K` | Q2_K | 3.68 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q3_K` | Q3_K | 4.16 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q3_K_L` | Q3_K_L | 4.46 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q3_K_S` | Q3_K_S | 3.81 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q4_0` | Q4_0 | 4.7 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q4_1` | Q4_1 | 5.1 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q4_K` | Q4_K | 4.95 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q4_K_S` | Q4_K_S | 4.73 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q5_0` | Q5_0 | 5.3 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q5_1` | Q5_1 | 5.7 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q5_K` | Q5_K | 5.5 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q5_K_S` | Q5_K_S | 5.3 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q6_K` | Q6_K | 6.3 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:Q8_0` | Q8_0 | 8.2 GB |
-| `shaowenchen/colossal-llama-2-7b-base-gguf:full` | full | 14 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q2_K` | Q2_K | 3.24 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q3_K` | Q3_K | 3.68 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q3_K_L` | Q3_K_L | 3.98 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q3_K_S` | Q3_K_S | 3.38 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q4_0` | Q4_0 | 4.05 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q4_1` | Q4_1 | 4.47 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q4_K` | Q4_K | 4.39 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q4_K_S` | Q4_K_S | 4.18 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q5_0` | Q5_0 | 4.99 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q5_1` | Q5_1 | 5.35 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q5_K` | Q5_K | 5.12 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q5_K_S` | Q5_K_S | 5 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q6_K` | Q6_K | 5.82 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:Q8_0` | Q8_0 | 7.18 GB |
+| `shaowenchen/colossal-llama-2-7b-base-gguf:full` | full | 10.49 GB |
 
 Usage:
 
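
The rows in the changed table read as Docker image tags rather than bare model files, so a specific quantization would be fetched by tag. A minimal sketch, assuming the `shaowenchen/colossal-llama-2-7b-base-gguf` tags listed above are published to a registry the Docker CLI can reach; the run invocation itself belongs to the README's own `Usage:` section (outside this hunk) and is not reproduced here:

```bash
# Sketch only: assumes the table entries are published Docker image tags.
# Pulls the Q4_K quantization; swap the tag for any other row in the table.
docker pull shaowenchen/colossal-llama-2-7b-base-gguf:Q4_K
```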