
# Model Card for gemma-2-9b-it-4bit

🚨 This model is a 4-bit quantized version of Google's gemma-2-9b-it, produced with bitsandbytes. You can find the unquantized version of gemma-2-9b-it here.
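Because the checkpoint is already stored in 4-bit bitsandbytes format, it can be loaded directly with 🤗 Transformers. A minimal sketch follows; the repository id is a placeholder (the actual repo id is not given in this card), and a CUDA GPU with `bitsandbytes` installed is assumed.

```python
# Minimal loading sketch for a pre-quantized 4-bit bitsandbytes checkpoint.
# NOTE: "your-username/gemma-2-9b-it-4bit" is a placeholder repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/gemma-2-9b-it-4bit"  # placeholder, substitute the real repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
# Quantization settings are read from the checkpoint's config; no
# BitsAndBytesConfig is needed when the weights are already 4-bit.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain 4-bit quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since the stored weights are already quantized, this avoids re-quantizing the full-precision gemma-2-9b-it at load time and reduces the download to roughly the 4-bit footprint.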

## Model details

- Model size: 5.21B params (Safetensors)
- Tensor types: F32, FP16, U8