
GPTQ version of meta-llama/Llama-3.2-1B: 8-bit, group size 128
✅ Tested on vLLM
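Since the card states the model was tested on vLLM, a minimal loading sketch may help. This is an assumption-based example, not from the card: it assumes vLLM is installed, a GPU is available, and that vLLM's GPTQ support picks up the quantization config from the repo.

```python
# Minimal sketch (not from the model card): load this GPTQ checkpoint
# with vLLM and run a single generation.
from vllm import LLM, SamplingParams

# Model id taken from this repo; "gptq" tells vLLM which quantization
# backend to use (it can usually also auto-detect it from the config).
llm = LLM(
    model="adriabama06/Llama-3.2-1B-Instruct-GPTQ-8bit-128g",
    quantization="gptq",
)

params = SamplingParams(temperature=0.7, max_tokens=64)
outputs = llm.generate(["Write a one-line greeting."], params)
print(outputs[0].outputs[0].text)
```

Serving over an OpenAI-compatible API works the same way, e.g. `vllm serve adriabama06/Llama-3.2-1B-Instruct-GPTQ-8bit-128g --quantization gptq`.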


Model tree for adriabama06/Llama-3.2-1B-Instruct-GPTQ-8bit-128g
