---
license: other
datasets:
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
tags:
- uncensored
inference: false
---
# WizardLM-13B-Uncensored GGML
These are GGML format quantised 4-bit, 5-bit and 8-bit models of [Eric Hartford's 'uncensored' training of WizardLM 13B](https://huggingface.co/ehartford/WizardLM-13B-Uncensored).
These files are for CPU (+ CUDA) inference using [llama.cpp](https://github.com/ggerganov/llama.cpp).
## Repositories available
* [4-bit, 5-bit and 8-bit GGML models for CPU inference](https://huggingface.co/TheBloke/wizardLM-13B-Uncensored-GGML).
* [4bit's GPTQ 4-bit model for GPU inference](https://huggingface.co/4bit/WizardLM-13B-Uncensored-4bit-128g).
* [Eric's original float16 HF format model for GPU inference and further conversions](https://huggingface.co/ehartford/WizardLM-13B-Uncensored).
## THESE FILES REQUIRE LATEST LLAMA.CPP (May 12th 2023 - commit b9fd7ee)!
llama.cpp recently made a breaking change to its quantisation methods.
I have quantised the GGML files in this repo with the latest version. Therefore you will require llama.cpp compiled on May 12th or later (commit `b9fd7ee` or later) to use them.
If you are currently unable to update llama.cpp, e.g. because you use a UI which hasn't updated yet, you can find a [q5_1 GGML for the older llama.cpp code here](https://huggingface.co/TehVenom/WizardLM-13B-Uncensored-Q5_1-GGML).
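If you build llama.cpp from source, the following is a minimal sketch of getting a compatible build, assuming a Linux or macOS environment with `git` and `make` available:
```
# Fetch and build the latest llama.cpp (commit b9fd7ee, May 12th 2023, or later)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git pull   # if you already had a checkout, update it instead of cloning
make       # produces the ./main binary used in the examples below
```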
## Provided files
| Name | Quant method | Bits | Size | RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| `wizardLM-13B-Uncensored.ggml.q4_0.bin` | q4_0 | 4bit | 8.14GB | 10.5GB | 4-bit. Smallest file size and RAM requirement of the provided files. |
| `wizardLM-13B-Uncensored.ggml.q4_1.bin` | q4_1 | 4bit | 9.76GB | 12.25GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0, with quicker inference than the q5 models. |
| `wizardLM-13B-Uncensored.ggml.q5_0.bin` | q5_0 | 5bit | 8.95GB | 11.0GB | 5-bit. Higher accuracy than q4, at the cost of more resource usage and slower inference. |
| `wizardLM-13B-Uncensored.ggml.q5_1.bin` | q5_1 | 5bit | 9.76GB | 12.25GB | 5-bit. Even higher accuracy, with higher resource usage and slower inference. |
| `wizardLM-13B-Uncensored.ggml.q8_0.bin` | q8_0 | 8bit | 14.6GB | 17GB | 8-bit. Almost indistinguishable from float16. Huge resource use and slow. Not recommended for normal use. |
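To illustrate, one way to download a single file is via a direct Hugging Face `resolve` URL; the q5_0 file below is just an example choice, and `wget` is assumed to be installed:
```
# Download one quantised file (pick the one that fits your RAM budget)
wget https://huggingface.co/TheBloke/wizardLM-13B-Uncensored-GGML/resolve/main/wizardLM-13B-Uncensored.ggml.q5_0.bin
```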
## How to run in `llama.cpp`
I use the following command line; adjust for your tastes and needs:
```
./main -t 8 -m wizardLM-13B-Uncensored.ggml.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "### Instruction: write a story about llamas ### Response:"
```
Change `-t 8` to the number of physical CPU cores you have.
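If you prefer an interactive chat over a one-shot prompt, here is a sketch using llama.cpp's instruct mode; the `-i -ins` flags reflect the May 2023 builds, so check `./main --help` on your version:
```
# Interactive instruct mode: llama.cpp wraps each of your inputs in the
# "### Instruction: ... ### Response:" template shown above
./main -t 8 -m wizardLM-13B-Uncensored.ggml.q5_0.bin --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -i -ins
```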
## How to run in `text-generation-webui`
GGML models can be loaded into text-generation-webui by installing the llama.cpp module, then placing the GGML model file in a models folder as usual.
Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
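As a minimal sketch, assuming a standard text-generation-webui checkout whose model folder is `models/` (the paths and the `--model` flag are illustrative of the webui as of mid-2023; adjust to your install):
```
# Put the GGML file where text-generation-webui looks for models,
# then start the server pointing at it
cp wizardLM-13B-Uncensored.ggml.q5_0.bin text-generation-webui/models/
cd text-generation-webui
python server.py --model wizardLM-13B-Uncensored.ggml.q5_0.bin
```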
# Original model card
This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
Shout out to the open source AI/ML community, and everyone who helped me out.
Note:
An uncensored model has no guardrails.
You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
Publishing anything this model generates is the same as publishing it yourself.
You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.