aaronday3 committed
Commit fcf4436
1 Parent(s): 21c9804

Update README.md

Files changed (1)
  1. README.md +4 -4
README.md CHANGED
@@ -65,9 +65,12 @@ by rivePiPH: <br>
 **Most important tip:** swipe 2-3 times if you don't like a response. This model gives wildly differing swipes.
 
+<h2>OOC Steering</h2>
+
+**Use this! It works extremely well.** We specifically trained the model to accept instructions in the format "OOC: character should be more assertive" etc. It works whether in the very first message or thousands of tokens deep into the context. Combining this with editing the output (if you want) makes the model very steerable.
 <h2>Sampling</h2>
 
-**Use these, they work best:**
+Use these:
 ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/uzVgp1ZMNV_LRx1stLxJ6.png)
 
 Don't shy away from experimenting after you get a feel for the model though.
@@ -107,9 +110,6 @@ If you don't like it, **you can override** by editing the character message and
 I have tested these settings and they work OK for 16K. Depending on the roleplay complexity and message length, experiment to see whether the model starts breaking. For me, 16K works fine.
 <img src="https://cdn-uploads.huggingface.co/production/uploads/630cf5d14ca0a22768bbe10c/3f7JOEnXhKCDcDF4Eiq-B.png" alt="" width="300"/>
-<h2>OOC Steering</h2>
-
-We specifically trained the model to accept instructions in the format "OOC: character should be more assertive" etc. Even thousands of tokens deep into the context. Combining this with editing the output, the model is very steerable.
 <h2>Other Important Tips</h2>
 
 Take an active role in the RP and say the type of response you expect. You don't always have to do this, but it helps sometimes. For example, instead of *we drink and drink 15 glasses of champagne* say *we drink and drink 15 glasses of champagne, both becoming extremely drunk*
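As a minimal sketch of the "OOC:" steering convention the diff describes: assuming a standard role/content chat-message list (the helper name and the example turns below are illustrative, not from the model card), an out-of-character note can simply be appended as its own user turn:

```python
# Hypothetical sketch: injecting an "OOC:" steering note into a chat-style
# message list before sending it to the model. The role/content dicts follow
# the common chat-template convention; nothing here is specific to this model.

def add_ooc_note(messages, note):
    """Return a copy of the chat history with an out-of-character
    instruction appended as its own user turn."""
    return messages + [{"role": "user", "content": f"OOC: {note}"}]

chat = [
    {"role": "user", "content": "*raises a glass* To us!"},
    {"role": "assistant", "content": "*clinks glasses hesitantly*"},
]

steered = add_ooc_note(chat, "character should be more assertive")
print(steered[-1]["content"])  # OOC: character should be more assertive
```

Because the note rides along as a normal turn, it works at any depth in the context, and the original history is left untouched so you can also edit the previous assistant message if you want to steer harder.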