Joseph717171 committed on
Commit
9abcd1c
β€’
1 Parent(s): fc914fc

Update README.md

Files changed (1): README.md (+1 -1)
README.md CHANGED
@@ -1,4 +1,4 @@
 Custom GGUF quants of arcee-ai’s [Llama-3.1-SuperNova-Lite-8B](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite), where the Output Tensors are quantized to Q8_0 while the Embeddings are kept at F32. Enjoy! 🧠🔥🚀
 
-Update: For some reason, the model was initially smaller than LLama-3.1-8B-Instruct after quantizing. We have since, rectified this: if you want the most intelligent and most capable quantized GGUF version of Llama-3.1-SuperNova-Lite-8.0B, use the OF32.EF32.IQuants.
+Update: For some reason, the model was initially smaller than LLama-3.1-8B-Instruct after quantizing. This has since been rectified: if you want the most intelligent and most capable quantized GGUF version of Llama-3.1-SuperNova-Lite-8.0B, use the OF32.EF32.IQuants.
 The original OQ8_0.EF32.IQuants will remain in the repo for those who want to use them. Cheers! 😁