
These are quick GGUF quantizations of DiscoResearch/Llama3-DiscoLeo-Instruct-8B-v0.1.

They were made for testing purposes and include:

  • one converted with an older llama.cpp version without the BPE pre-tokenizer fix, from the fp16 binary
  • one converted with an older llama.cpp version without the BPE pre-tokenizer fix, from the fp32 binary
  • one converted with a recent llama.cpp version that includes the BPE pre-tokenizer fix
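For reference, a sketch of the typical llama.cpp conversion workflow that would produce such files. Script and binary names changed between llama.cpp versions, and the local paths and output file names below are assumptions, not the exact commands used here:

```shell
# Assumes the HF checkpoint has been downloaded to ./Llama3-DiscoLeo-Instruct-8B-v0.1
# and that you are in a built llama.cpp checkout (script/binary names vary by version).

# Convert the checkpoint to full-precision GGUF, once at fp16 and once at fp32.
python convert-hf-to-gguf.py ./Llama3-DiscoLeo-Instruct-8B-v0.1 \
    --outtype f16 --outfile discoleo-8b-f16.gguf
python convert-hf-to-gguf.py ./Llama3-DiscoLeo-Instruct-8B-v0.1 \
    --outtype f32 --outfile discoleo-8b-f32.gguf

# Quantize each full-precision GGUF down to 4-bit.
./quantize discoleo-8b-f16.gguf discoleo-8b-f16-Q4_K_M.gguf Q4_K_M
./quantize discoleo-8b-f32.gguf discoleo-8b-f32-Q4_K_M.gguf Q4_K_M
```

Whether the BPE pre-tokenizer fix is applied depends on the llama.cpp commit the conversion script is run from, not on a flag.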

Currently the GGUFs perform below expectations; in comparison, the MLX version performs best. Any ideas why?
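One way to narrow this down is to compare perplexity across the quantizations with llama.cpp's perplexity tool (binary name varies by version; the model and corpus file names below are assumptions):

```shell
# Perplexity on a reference corpus; lower is better.
# A pre-tokenizer mismatch typically shows up as a clearly larger
# perplexity for the conversions made without the BPE fix.
./perplexity -m discoleo-8b-f16-Q4_K_M.gguf -f wikitext-2-raw/wiki.test.raw
```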

Downloads last month: 48

GGUF
Model size: 8.03B params
Architecture: llama
Quantizations: 4-bit, 32-bit