Model

cminja/SFR-Iterative-DPO-LLaMA-3-8B-R-Q8_0-GGUF was converted to GGUF format from Salesforce/SFR-Iterative-DPO-LLaMA-3-8B-R using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

Update: The link to the original model card is no longer available. The SFR-Iterative-DPO-LLaMA-3-8B-R model appears to have been taken down from HF; see reddit1 and reddit2 for more details.

Use with llama.cpp

Clone and Build llama.cpp

# Clone the repository and build it with CMake
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build
cd build
cmake ..
make
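
Optionally, the build can be parallelized once CMake has configured the project; this sketch assumes a Linux-style shell where nproc is available:

# Optional: parallel build from inside the build directory
cmake --build . --config Release -j"$(nproc)"

Note that newer llama.cpp releases may name the resulting binary llama-cli rather than main; if so, adjust the run command further below accordingly.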

Login to Hugging Face

pip install huggingface_hub
huggingface-cli login
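
For non-interactive environments (for example CI), the CLI also accepts a token directly; HF_TOKEN below is only a placeholder for an access token you set yourself:

# Non-interactive login using a personal access token (placeholder variable)
huggingface-cli login --token "$HF_TOKEN"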

Download the model using Hugging Face CLI

huggingface-cli download cminja/SFR-Iterative-DPO-LLaMA-3-8B-R-Q8_0-GGUF --repo-type model
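
If you would rather keep the GGUF file outside the default Hugging Face cache, the same command accepts a target directory; the ./models path below is only an example:

# Download into a local directory instead of the HF cache
huggingface-cli download cminja/SFR-Iterative-DPO-LLaMA-3-8B-R-Q8_0-GGUF --repo-type model --local-dir ./models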

Usage

Run the model

./bin/main --model ~/.cache/huggingface/hub/models--cminja--SFR-Iterative-DPO-LLaMA-3-8B-R-Q8_0-GGUF/snapshots/2a2dadb2c78cc3d59e8ed4cd7d6b4f635bcd3f12/sfr-iterative-dpo-llama-3-8b-r.Q8_0.gguf -p "Few interesting nuances about WoW leveling are"
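
The same prompt and model path can be combined with the usual llama.cpp generation flags; the values below are illustrative defaults, not tuning recommendations from the original card:

# Example: cap generation at 256 tokens, use a 4096-token context, and (if built with GPU support) offload layers
./bin/main \
  --model ~/.cache/huggingface/hub/models--cminja--SFR-Iterative-DPO-LLaMA-3-8B-R-Q8_0-GGUF/snapshots/2a2dadb2c78cc3d59e8ed4cd7d6b4f635bcd3f12/sfr-iterative-dpo-llama-3-8b-r.Q8_0.gguf \
  -p "Few interesting nuances about WoW leveling are" \
  -n 256 -c 4096 --temp 0.7 -ngl 33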