EXL2 quants of CodeLlama-13B-instruct
- [2.40 bits per weight](https://huggingface.co/turboderp/CodeLlama-13B-instruct-exl2/tree/2.4bpw)
- [2.45 bits per weight](https://huggingface.co/turboderp/CodeLlama-13B-instruct-exl2/tree/2.45bpw)
- [2.50 bits per weight](https://huggingface.co/turboderp/CodeLlama-13B-instruct-exl2/tree/2.5bpw)
- [2.55 bits per weight](https://huggingface.co/turboderp/CodeLlama-13B-instruct-exl2/tree/2.55bpw)
- [2.60 bits per weight](https://huggingface.co/turboderp/CodeLlama-13B-instruct-exl2/tree/2.6bpw)
- [2.65 bits per weight](https://huggingface.co/turboderp/CodeLlama-13B-instruct-exl2/tree/2.65bpw)
- [2.70 bits per weight](https://huggingface.co/turboderp/CodeLlama-13B-instruct-exl2/tree/2.7bpw)
- [3.00 bits per weight](https://huggingface.co/turboderp/CodeLlama-13B-instruct-exl2/tree/3.0bpw)
- [3.50 bits per weight](https://huggingface.co/turboderp/CodeLlama-13B-instruct-exl2/tree/3.5bpw)
- [4.00 bits per weight](https://huggingface.co/turboderp/CodeLlama-13B-instruct-exl2/tree/4.0bpw)
- [4.65 bits per weight](https://huggingface.co/turboderp/CodeLlama-13B-instruct-exl2/tree/4.65bpw)

[measurement.json](https://huggingface.co/turboderp/CodeLlama-13B-instruct-exl2/blob/main/measurement.json)
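
To pick a quant that fits your GPU, the weight-only memory footprint can be estimated directly from the bits-per-weight figure. This is a rough sketch, assuming ~13 billion parameters for this model; the KV cache and activations require additional memory on top of the weights.

```python
# Rough weight-only VRAM estimate for an EXL2 quant.
# Assumes ~13e9 parameters (CodeLlama-13B); actual usage is
# higher once the KV cache and activations are included.
def weights_gb(n_params: float, bpw: float) -> float:
    # params * bits-per-weight -> total bits -> bytes -> GB
    return n_params * bpw / 8 / 1e9

for bpw in (2.4, 3.0, 4.65):
    print(f"{bpw} bpw ≈ {weights_gb(13e9, bpw):.1f} GB")
# 2.4 bpw ≈ 3.9 GB, 3.0 bpw ≈ 4.9 GB, 4.65 bpw ≈ 7.6 GB
```

Each quant lives on its own branch of the repo, so download only the revision you need rather than the whole repository.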