Joseph717171 committed
Commit feb54e4
Parent(s): d0885d1

Update README.md

Files changed (1): README.md (+1 -0)
README.md CHANGED
@@ -2,4 +2,5 @@ Custom GGUF quants of arcee-ai's [Llama-3.1-SuperNova-Lite-8B](https://hugging
 
 Update: For some reason, the model was initially smaller than Llama-3.1-8B-Instruct after quantizing. This has since been rectified: if you want the most intelligent and most capable quantized GGUF version of Llama-3.1-SuperNova-Lite-8.0B, use the OF32.EF32.IQuants.
 The original OQ8_0.EF32.IQuants will remain in the repo for those who want to use them. Cheers! 😁
+
 Addendum: I'm stupid. I was comparing my OQ8_0.EF32 IQuants of Llama-3.1-SuperNova-Lite-8B to my OQ8_0.EF32 IQuants of Hermes-3-Llama-3.1-8B, thinking the Hermes-3 quants were the same size as my OQ8_0.EF32.IQuants of Llama-3.1-8B-Instruct; they're not: Hermes-3-Llama-3.1-8B is bigger. So, now we have both OQ8_0.EF32.IQuants and OF32.EF32.IQuants, and both are great quant schemes. The only difference, of course, is that the OF32.EF32.IQuants trade a bit more vRAM for even more accuracy. Cheers! 😂
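
For anyone wanting to reproduce quants along these lines, here is a minimal sketch using llama.cpp's llama-quantize tool. It assumes the O/E prefixes in the scheme names denote the output-tensor and token-embedding types; the file paths and the IQ4_XS base type are placeholders, not taken from this commit:

```bash
# Hypothetical reproduction sketch; paths and the IQ4_XS base type are assumptions.
# llama-quantize lets you override individual tensor types while quantizing the rest
# of the model to the chosen base type.

# OQ8_0.EF32 scheme: output tensor at Q8_0, token embeddings at F32, IQ4_XS elsewhere
./llama-quantize --output-tensor-type q8_0 --token-embedding-type f32 \
    Llama-3.1-SuperNova-Lite-8B-F32.gguf \
    Llama-3.1-SuperNova-Lite-8B-OQ8_0.EF32.IQ4_XS.gguf IQ4_XS

# OF32.EF32 scheme: output tensor kept at full F32 (more accuracy, more vRAM)
./llama-quantize --output-tensor-type f32 --token-embedding-type f32 \
    Llama-3.1-SuperNova-Lite-8B-F32.gguf \
    Llama-3.1-SuperNova-Lite-8B-OF32.EF32.IQ4_XS.gguf IQ4_XS
```

The only difference between the two commands is the output-tensor override, which is exactly the trade-off described above: OF32 holds output.weight at F32 for maximum accuracy, while OQ8_0 quantizes it to Q8_0 to save vRAM.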