ggml versions of OpenLLaMA 3B v2

Use with llama.cpp

Support is now merged into the master branch of llama.cpp.
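As a quick sketch, you can run one of these files with llama.cpp's `main` example; the model file name below is a placeholder, so substitute whichever quantization you downloaded:

```sh
# Minimal sketch of running a ggml file with llama.cpp's main binary.
# open-llama-3b-v2-q4_0.bin is an assumed name; use the file you actually downloaded.
./main -m open-llama-3b-v2-q4_0.bin \
    -p "Building a website can be done in 10 simple steps:" \
    -n 256
```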

Newer quantizations

llama.cpp now has more quantization types (the k-quants from PR #2001), some using fewer than 4 bits per weight. They are currently not well supported for this model, because its tensor dimensions are not divisible by the default k-quant super-block size of 256. If you want to use them, you have to build llama.cpp (from build 829 (ff5d58f)) with the LLAMA_QKK_64 Make or CMake variable enabled, which reduces the super-block size to 64. Then you can re-quantize the F16 (or possibly the Q8_0) version to the type you want, as sketched below.
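A minimal sketch of that workflow, assuming a llama.cpp checkout at or after build 829 (ff5d58f); the model file names are placeholders:

```sh
# Build with the reduced k-quant super-block size (QK_K = 64).
make clean
make LLAMA_QKK_64=1
# Or with CMake: cmake -B build -DLLAMA_QKK_64=ON && cmake --build build

# Re-quantize the F16 file to a k-quant type, e.g. Q2_K.
./quantize open-llama-3b-v2-f16.bin open-llama-3b-v2-q2_K.bin Q2_K
```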

Perplexity on wiki.test.raw

Coming soon ...
