## Llamacpp Quantizations of starcoder2-15b-instruct

Using llama.cpp release b2354 for quantization.

Original model: https://huggingface.co/TechxGenus/starcoder2-15b-instruct

## Download a file (not the whole branch) from below:

| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| starcoder2-15b-instruct-Q8_0.gguf | Q8_0 | 16.96GB | Extremely high quality, generally unneeded but max available quant. |
| starcoder2-15b-instruct-Q6_K.gguf | Q6_K | 13.10GB | Very high quality, near perfect, recommended. |
| starcoder2-15b-instruct-Q5_K_M.gguf | Q5_K_M | 11.43GB | High quality, very usable. |
| starcoder2-15b-instruct-Q5_K_S.gguf | Q5_K_S | 11.02GB | High quality, very usable. |
| starcoder2-15b-instruct-Q5_0.gguf | Q5_0 | 11.02GB | High quality, older format, generally not recommended. |
| starcoder2-15b-instruct-Q4_K_M.gguf | Q4_K_M | 9.86GB | Good quality, similar to 4.25 bpw. |
| starcoder2-15b-instruct-Q4_K_S.gguf | Q4_K_S | 9.25GB | Slightly lower quality with small space savings. |
| starcoder2-15b-instruct-Q4_0.gguf | Q4_0 | 9.06GB | Decent quality, older format, generally not recommended. |
| starcoder2-15b-instruct-Q3_K_L.gguf | Q3_K_L | 8.96GB | Lower quality but usable, good for low RAM availability. |
| starcoder2-15b-instruct-Q3_K_M.gguf | Q3_K_M | 8.10GB | Even lower quality. |
| starcoder2-15b-instruct-Q3_K_S.gguf | Q3_K_S | 6.98GB | Low quality, not recommended. |
| starcoder2-15b-instruct-Q2_K.gguf | Q2_K | 6.19GB | Extremely low quality, not recommended. |
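
To grab a single file programmatically rather than through the browser, the `huggingface_hub` client can download it directly. A minimal sketch in Python; the `bartowski/starcoder2-15b-instruct-GGUF` repo ID is an assumption inferred from this card, so adjust it to wherever these quants are actually hosted:

```python
from huggingface_hub import hf_hub_download

# Repo ID is an assumption based on this card; replace with the actual quant repo.
model_path = hf_hub_download(
    repo_id="bartowski/starcoder2-15b-instruct-GGUF",
    filename="starcoder2-15b-instruct-Q4_K_M.gguf",  # any filename from the table above
    local_dir=".",
)
print(f"Downloaded to: {model_path}")
```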

Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
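
Once downloaded, a GGUF file can be loaded with llama.cpp or one of its bindings. A minimal inference sketch using the third-party `llama-cpp-python` package; the context size and the instruction-style prompt format are assumptions, so check the original model card for the exact template:

```python
from llama_cpp import Llama

# Model path points at whichever quant you downloaded; n_ctx is an assumption.
llm = Llama(model_path="starcoder2-15b-instruct-Q4_K_M.gguf", n_ctx=4096)

# Prompt template is an assumption; verify against the original model card.
output = llm(
    "### Instruction\nWrite a Python function that reverses a string.\n### Response\n",
    max_tokens=256,
    stop=["### Instruction"],
)
print(output["choices"][0]["text"])
```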
