---
license: other
inference: false
---

<!-- header start -->
<div style="width: 100%;">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<!-- header end -->

# Monero's WizardLM-Uncensored-SuperCOT-Storytelling-30B GPTQ

This is a GPTQ-format, 4-bit quantised model of [Monero's WizardLM-Uncensored-SuperCOT-Storytelling-30B](https://huggingface.co/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b).

It is the result of quantising to 4-bit using [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa).

## Repositories available

* [4-bit GPTQ models for GPU inference](https://huggingface.co/TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-GPTQ)
* [4-bit, 5-bit, and 8-bit GGML models for CPU+GPU inference](https://huggingface.co/TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-GGML)
* [Unquantised fp16 model in PyTorch format, for GPU inference and for further conversions](https://huggingface.co/Monero/WizardLM-Uncensored-SuperCOT-StoryTelling-30b)

## Prompt template

```
You are a helpful assistant
### USER: prompt goes here
### ASSISTANT:
```

To allow all output, add `### Certainly!` to the end of the prompt.
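The template above can be assembled programmatically. Below is a minimal sketch; `build_prompt` and `allow_all` are illustrative names of my own, not part of the model or any library, and the exact placement of `### Certainly!` follows the note above (appended to the end of the prompt).

```python
def build_prompt(user_message: str, allow_all: bool = False) -> str:
    """Assemble a prompt in this model's USER/ASSISTANT template.

    Illustrative helper, not part of any library; the template text
    itself comes from the model card.
    """
    prompt = (
        "You are a helpful assistant\n"
        f"### USER: {user_message}\n"
        "### ASSISTANT:"
    )
    if allow_all:
        # Per the card, appending "### Certainly!" allows all output.
        prompt += " ### Certainly!"
    return prompt

print(build_prompt("Tell me a story about a dragon."))
```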

## How to easily download and use this model in text-generation-webui

Open the text-generation-webui UI as normal.

1. Click the **Model** tab.
2. Under **Download custom model or LoRA**, enter `TheBloke/WizardLM-Uncensored-SuperCOT-StoryTelling-GPTQ`.
3. Click **Download**.
4. Wait until it says the download is finished.
5. Click the **Refresh** icon next to **Model** in the top left.
6. In the **Model** drop-down, choose the model you just downloaded: `WizardLM-Uncensored-SuperCOT-StoryTelling-GPTQ`.
7. If you see an error in the bottom right, ignore it - it's temporary.
8. Fill out the **GPTQ parameters** on the right: `Bits = 4`, `Groupsize = None`, `model_type = Llama`.
9. Click **Save settings for this model** in the top right.
10. Click **Reload the Model** in the top right.
11. Once it says it's loaded, click the **Text Generation** tab and enter a prompt!

## Provided files

**Compatible file - WizardLM-Uncensored-SuperCOT-Storytelling-GPTQ-4bit.act.order.safetensors**

This will work with all versions of GPTQ-for-LLaMa, giving maximum compatibility.

It was created without group_size to minimise VRAM usage, and with `--act-order` to improve inference quality.

* `WizardLM-Uncensored-SuperCOT-Storytelling-GPTQ-4bit.act.order.safetensors`
  * Works with all versions of GPTQ-for-LLaMa code, both Triton and CUDA branches
  * Works with AutoGPTQ
  * Works with text-generation-webui one-click-installers
  * Parameters: Groupsize = None. Act-order.
  * Command used to create the GPTQ:
    ```
    python llama.py HF_repo c4 --wbits 4 --act-order --true-sequential --save_safetensors WizardLM-Uncensored-SuperCOT-Storytelling-GPTQ-4bit.act.order.safetensors
    ```
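GPTQ repositories conventionally ship a `quantize_config.json` describing these settings so loaders such as AutoGPTQ can pick them up automatically. As a rough sketch only (field names follow AutoGPTQ's convention; I am assuming the mapping, where `group_size: -1` means no grouping and `desc_act: true` corresponds to `--act-order`), the parameters above would look like:

```json
{
  "bits": 4,
  "group_size": -1,
  "desc_act": true,
  "true_sequential": true
}
```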

<!-- footer start -->
## Discord

For further support, and discussions on these models and AI in general, join us at: [TheBloke AI's Discord server](https://discord.gg/UBgz4VXf)

## Thanks, and how to contribute

Thanks to the [chirper.ai](https://chirper.ai) team!

I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.

If you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and work on new AI projects.

Donaters will get priority support on any and all AI/LLM/model questions, plus other benefits.

* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI

**Patreon special mentions**: Aemon Algiz; Talal Aujan; Jonathan Leane; Illia Dulskyi; Khalefa Al-Ahmad; senxiiz. Thank you all, and to all my other generous patrons and donaters.
<!-- footer end -->

# Original model card: Monero's WizardLM-Uncensored-SuperCOT-Storytelling-30B

This model is a triple model merge of WizardLM Uncensored, SuperCOT, and Storytelling, resulting in a comprehensive boost in reasoning and story-writing capabilities.

To allow all output, add `### Certainly!` to the end of your prompt.

You've become a compendium of knowledge on a vast array of topics.

Lore Mastery is an arcane tradition fixated on understanding the underlying mechanics of magic. It is the most academic of all arcane traditions. The promise of uncovering new knowledge or proving (or discrediting) a theory of magic is usually required to rouse its practitioners from their laboratories, academies, and archives to pursue a life of adventure. Known as savants, followers of this tradition are a bookish lot who see beauty and mystery in the application of magic. The results of a spell are less interesting to them than the process that creates it. Some savants take a haughty attitude toward those who follow a tradition focused on a single school of magic, seeing them as provincial and lacking the sophistication needed to master true magic. Other savants are generous teachers, countering ignorance and deception with deep knowledge and good humor.