mradermacher committed
Commit de2d190 · 1 Parent(s): b6d629a
auto-patch README.md

README.md CHANGED
@@ -36,6 +36,7 @@ more details, including on how to concatenate multi-part files.
 
 | Link | Type | Size/GB | Notes |
 |:-----|:-----|--------:|:------|
+| [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-Flight-Multi-Turn-V3-DPO-GGUF/resolve/main/gemma-2-2b-it-Flight-Multi-Turn-V3-DPO.IQ3_S.gguf) | IQ3_S | 1.5 | beats Q3_K* |
 | [GGUF](https://huggingface.co/mradermacher/gemma-2-2b-it-Flight-Multi-Turn-V3-DPO-GGUF/resolve/main/gemma-2-2b-it-Flight-Multi-Turn-V3-DPO.f16.gguf) | f16 | 5.3 | 16 bpw, overkill |
 
 Here is a handy graph by ikawrakow comparing some lower-quality quant
@@ -55,6 +56,6 @@ questions you might have and/or if you want some other model quantized.
 
 I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
 me use its servers and providing upgrades to my workstation to enable
-this work in my free time.
+this work in my free time.
 
 <!-- end -->
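As a rough sanity check on the table's Size/GB figures, bits per weight can be estimated from a GGUF file's size and the model's parameter count. A minimal sketch, assuming gemma-2-2b's roughly 2.6B parameters and treating GB as 10^9 bytes (the helper name is illustrative, not part of any tool here):

```python
def estimate_bpw(size_gb: float, n_params: float) -> float:
    """Estimate bits per weight from file size (GB) and parameter count."""
    # Convert GB to bits, then divide by the number of weights.
    return size_gb * 8e9 / n_params

# f16 file from the table: 5.3 GB over ~2.6e9 params comes out near 16 bpw.
print(round(estimate_bpw(5.3, 2.6e9), 1))
# IQ3_S file: 1.5 GB lands noticeably above 3 bpw, since embeddings and
# some tensors are typically kept at higher precision than the nominal quant.
print(round(estimate_bpw(1.5, 2.6e9), 1))
```

This is why an "IQ3" file can be larger than three bits per weight would suggest: the quant name describes the dominant tensor format, not a uniform precision across the whole file.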