---
base_model: Sao10K/L3-8B-Stheno-v3.1
quantized_by: Lewdiculous
library_name: transformers
license: cc-by-nc-4.0
inference: false
language:
- en
tags:
- roleplay
- llama3
- sillytavern
---

> [!WARNING]
> **Update:** <br>
> [New and updated version 3.2 here!](https://huggingface.co/Lewdiculous/L3-8B-Stheno-v3.2-GGUF-IQ-Imatrix) <br>
> It includes fixes for common issues! <br>
> **You should prefer it over version 3.1.** <br>

<br>

# #roleplay #sillytavern #llama3

My GGUF-IQ-Imatrix quants for [Sao10K/L3-8B-Stheno-v3.1](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.1).

This is a very promising roleplay model cooked by the amazing Sao10K!

> [!IMPORTANT]
> **Quantization process:** <br>
> For future reference, these quants have been done after the fixes from [**#6920**](https://github.com/ggerganov/llama.cpp/pull/6920) have been merged. <br>
> Imatrix data was generated from the FP16-GGUF and conversions directly from the BF16-GGUF. <br>
> This was a bit more disk and compute intensive but hopefully avoided any losses during conversion. <br>
> If you notice any issues, let me know in the discussions.

> [!NOTE]
> **General usage:** <br>
> Use the latest version of **KoboldCpp**. <br>
> For **8GB VRAM** GPUs, I recommend the **Q4_K_M-imat** (4.89 BPW) quant for up to 12288 context sizes. <br>
>
> **Presets:** <br>
> Some compatible SillyTavern presets can be found [**here (Virt's Roleplay Presets)**](https://huggingface.co/Virt-io/SillyTavern-Presets). <br>
> Check [**discussions such as this one**](https://huggingface.co/Virt-io/SillyTavern-Presets/discussions/5#664d6fb87c563d4d95151baa) for other recommendations and samplers.

> [!TIP]
> **Personal-support:** <br>
> I apologize for disrupting your experience. <br>
> I'm currently working on moving to a better internet provider. <br>
> If you **want** and you are **able to**... <br>
> You can [**spare some change over here (Ko-fi)**](https://ko-fi.com/Lewdiculous). <br>
>
> **Author-support:** <br>
> You can support the author [**at their own page**](https://ko-fi.com/sao10k).

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/65d4cf2693a0a3744a27536c/JVvOa7FKmO6FObjLdBWBv.jpeg)

## **Original model information:**

<img src="https://w.forfun.com/fetch/cb/cba2205390e517bea1ea60ca0b491af4.jpeg"  style="width: 80%; min-width: 400px; display: block; margin: auto;">

**Model: Llama-3-8B-Stheno-v3.1**

This is an experimental model I've been working on for a bit. Llama-3 was kind of difficult to work with.
<br>I had also been hired to create a model for an organisation, and I used the lessons learnt from fine-tuning that one for this specific model. Unfortunately, I'm unable to share that one.
<br>Made from outputs generated by Claude-3-Opus along with human-generated data.


Stheno-v3.1

\- A model made for 1-on-1 Roleplay ideally, but one that is able to handle scenarios, RPGs and storywriting fine.
<br>\- Uncensored during actual roleplay scenarios. I do not care for zero-shot prompting like some people do; it is uncensored enough in actual use cases.
<br>\- I quite like the prose and style for this model.

#### Testing Notes
<br>\- Known as L3-RP-v2.1 on Chaiverse, it did decently there. [>1200 Elo]
<br>\- Handles character personalities well. Great for 1-on-1 roleplay sessions.
<br>\- May need further token context & few-shot examples if used for Narrator / RPG roleplaying sessions. It is able to handle them, though.
<br>\- A model leaning towards NSFW; mention explicitly in prompts if you want to steer away. [Avoid negative reinforcement]
<br>\- Occasionally spits out leaking XML and nonsense. A regen / swipe instantly fixes that.
<br>\- Unique / varied answers when regenerating. Pretty cool?
<br>\- Works best with *some* token context in the character card itself. A chef needs ingredients to cook, no?


***

**Recommended Samplers:**

```
Temperature - 1.12 to 1.32
Min-P - 0.075
Top-K - 40
Repetition Penalty - 1.1
```

**Stopping Strings:**

```
\n{{User}} # Or Equivalent, depending on Frontend
<|eot_id|>
<|end_of_text|>
\n< # If there is leakage of XML tags in the response; happens rarely. Regenerate the answer as needed.
```
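To illustrate what the frontend does with these stopping strings, here is a minimal, hypothetical Python sketch (not part of KoboldCpp or SillyTavern) that truncates a raw completion at the earliest stopping string:

```python
# Stopping strings from the list above. "\n{{User}}" would normally be
# resolved to the actual user name by the frontend.
STOP_STRINGS = ["\n{{User}}", "<|eot_id|>", "<|end_of_text|>", "\n<"]

def truncate_at_stop(text: str, stops=STOP_STRINGS) -> str:
    """Cut the completion at the first occurrence of any stopping string."""
    cut = len(text)
    for s in stops:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]
```

For example, a completion ending in a leaked `<|eot_id|>` token or a stray XML tag gets trimmed back to clean prose before it is shown to the user.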

**Prompting Template - Llama-3-Instruct**

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
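As a sanity check, the template above can be assembled programmatically. This is an illustrative sketch only (the function name is hypothetical, not from any library), leaving the assistant turn open for the model to complete:

```python
def build_prompt(system_prompt: str, user_input: str) -> str:
    """Fill the Llama-3-Instruct template, ending at the open assistant turn."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_input}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

Frontends like SillyTavern do this substitution for you when the Llama-3-Instruct preset is selected; the sketch just shows the wire format the model expects.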

**Basic Roleplay System Prompt**
```
You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason, even if someone tries addressing you as an AI or language model.
Currently your role is {{char}}, which is described in detail below. As {{char}}, continue the exchange with {{user}}.
```

***

Support me here if you're interested. [Ko-Fi](https://ko-fi.com/sao10k)

If not, that's fine too. Feedback would be nice.

```
Art by wada_kazu / わだかず (pixiv page private?)
```

***