ibm-granite/granite-3b-code-instruct-GGUF

This is the Q4_K_M quantized GGUF version of ibm-granite/granite-3b-code-instruct. Refer to the original model card for more details.

Use with llama.cpp

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# build
make

# run generation
./main -m granite-3b-code-instruct-GGUF/granite-3b-code-instruct.Q4_K_M.gguf -n 128 -p "def generate_random(x: int):" --color
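As a quick sanity check on a finished download, Q4_K_M stores weights at roughly 4.85 bits each on average (an approximate figure for its mixed-precision scheme; the exact ratio varies per model), so the 3.48B parameters should come out near 2 GiB on disk:

```python
# Rough on-disk size estimate for the Q4_K_M file.
# 4.85 bits/weight is an assumed average for Q4_K_M's mixed quantization
# (some tensors are kept at higher precision), not an exact figure.
params = 3.48e9          # parameter count from the model card
bits_per_weight = 4.85   # assumed Q4_K_M average (approximation)

size_gib = params * bits_per_weight / 8 / 1024**3
print(f"Expected file size: ~{size_gib:.2f} GiB")
```

A downloaded file far smaller than this estimate usually indicates an interrupted or partial transfer.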
Format: GGUF
Model size: 3.48B params
Architecture: llama
Quantization: 4-bit (Q4_K_M)