TheBloke committed
Commit 8f49612
1 Parent(s): 84fed7b

Updating model files

Files changed (1):
  1. README.md +23 -3
README.md CHANGED
@@ -6,6 +6,17 @@ tags:
 - uncensored
 inference: false
 ---
+<div style="width: 100%;">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+</div>
+<div style="display: flex; justify-content: space-between; width: 100%;">
+<div style="display: flex; flex-direction: column; align-items: flex-start;">
+<p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
+</div>
+<div style="display: flex; flex-direction: column; align-items: flex-end;">
+<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
+</div>
+</div>
 
 # WizardLM-13B-Uncensored GGML
 
@@ -51,16 +62,25 @@ GGML models can be loaded into text-generation-webui by installing the llama.cpp
 
 Further instructions here: [text-generation-webui/docs/llama.cpp-models.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp-models.md).
 
-Note: at this time text-generation-webui may not support the new May 19th llama.cpp quantisation methods for q4_0, q4_1 and q8_0 files.
+## Want to support my work?
 
+I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.
+
+So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and work on various AI projects.
+
+Donaters will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.
+
+* Patreon: coming soon! (just awaiting approval)
+* Ko-Fi: https://ko-fi.com/TheBlokeAI
+* Discord: https://discord.gg/UBgz4VXf
 # Original model card
 
 This is WizardLM trained with a subset of the dataset - responses that contained alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built-in, so that alignment (of any sort) can be added separately with for example with a RLHF LoRA.
 
 Shout out to the open source AI/ML community, and everyone who helped me out.
 
-Note:
-An uncensored model has no guardrails.
+Note:
+An uncensored model has no guardrails.
 You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.
 Publishing anything this model generates is the same as publishing it yourself.
 You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.