
Exllama v2 Quantizations of Yarn-Mistral-7b-128k

Using turboderp's ExLlamaV2 v0.0.7 for quantization.

Each branch contains a quantization at a different bits per weight, with the main branch containing only the measurement.json needed for further conversions.

Conversion was done using wikitext-103-raw-v1-test.parquet as the calibration dataset.
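For reference, a conversion like this is typically run through ExLlamaV2's convert.py; the exact flags below are an assumption based on that script, not something stated in this card:

# flags assumed: -i source model dir, -o working dir, -cf compiled output dir, -c calibration data, -b target bits per weight
python convert.py -i /path/to/Yarn-Mistral-7b-128k -o /tmp/exl2-work -cf Yarn-Mistral-7b-128k-exl2-4.0 -c wikitext-103-raw-v1-test.parquet -b 4.0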

Original model: https://huggingface.co/NousResearch/Yarn-Mistral-7b-128k

4.0 bits per weight

6.0 bits per weight

8.0 bits per weight

Download instructions

With git:

git clone --single-branch --branch 4.0 https://huggingface.co/bartowski/Yarn-Mistral-7b-128k-exl2
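Note that the model weights on Hugging Face are stored with Git LFS, so make sure LFS is set up before cloning:

git lfs install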

With huggingface-hub (credit to TheBloke for instructions):

pip3 install huggingface-hub

To download the main branch (useful only if you want the measurement.json) to a folder called Yarn-Mistral-7b-128k-exl2:

mkdir Yarn-Mistral-7b-128k-exl2
huggingface-cli download bartowski/Yarn-Mistral-7b-128k-exl2 --local-dir Yarn-Mistral-7b-128k-exl2 --local-dir-use-symlinks False
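The downloaded measurement.json can then be reused to skip the measurement pass when converting the original model to a new bits per weight yourself; the -m flag below is an assumption based on ExLlamaV2's convert.py:

# -m (assumed flag) points at an existing measurement.json so only the quantization pass runs
python convert.py -i /path/to/Yarn-Mistral-7b-128k -o /tmp/exl2-work -m Yarn-Mistral-7b-128k-exl2/measurement.json -b 5.0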

To download from a different branch, add the --revision parameter:

mkdir Yarn-Mistral-7b-128k-exl2
huggingface-cli download bartowski/Yarn-Mistral-7b-128k-exl2 --revision 4.0 --local-dir Yarn-Mistral-7b-128k-exl2 --local-dir-use-symlinks False