Update README.md
README.md (CHANGED)
---
base_model:
- nothingiisreal/L3.1-8B-Celeste-V1.5
- Sao10K/Llama-3.1-8B-Stheno-v3.4
- Sao10K/L3.1-8B-Niitama-v1.1
- arcee-ai/Llama-3.1-SuperNova-Lite
- akjindal53244/Llama-3.1-Storm-8B
- arcee-ai/Llama-Spark
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- crestf411/sunfall-peft
- v000000/L3.1-Celestial-Stone-2x8B
library_name: transformers
tags:
…
- dpo
---
> [!WARNING]
> **Content:**<br>
> This model's outputs can be a bit unhinged.

# Llama-3.1-Celestial-Stone-2x8B-DPO (BF16)

* *Mixture of Experts (14B).*
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/lyRa7z5maTqAaa43sxC2J.png)

[L3.1-Celestial-Stone-2x8B](https://huggingface.co/v000000/L3.1-Celestial-Stone-2x8B), finetuned on an Nvidia A100.

Half an epoch completed on the dataset [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1) with learning_rate=8e-6 (a training sketch follows).
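For readers who want to reproduce the pass, here is a minimal sketch of such a run with TRL's `DPOTrainer`. Only the base model, the dataset, learning_rate=8e-6, and the half-epoch budget come from this card; the batch size, beta, output directory, and everything else are placeholder assumptions, and the trainer's keyword names vary slightly across TRL versions.

```python
# Hypothetical reproduction sketch of the DPO pass described above.
# From the card: base model, dataset, learning_rate=8e-6, 0.5 epochs.
# Everything else (batch size, beta, output dir) is an assumption.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "v000000/L3.1-Celestial-Stone-2x8B"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

# gutenberg-dpo-v0.1 ships prompt/chosen/rejected columns, which is the
# format DPOTrainer expects for preference optimization.
train_data = load_dataset("jondurbin/gutenberg-dpo-v0.1", split="train")

args = DPOConfig(
    output_dir="celestial-stone-2x8b-dpo",  # assumed name
    learning_rate=8e-6,                     # stated on the card
    num_train_epochs=0.5,                   # "half an epoch completed"
    per_device_train_batch_size=1,          # assumption
    gradient_accumulation_steps=8,          # assumption
    beta=0.1,                               # assumption (TRL's default)
    bf16=True,
)

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=train_data,
    processing_class=tokenizer,  # named `tokenizer=` in older TRL releases
)
trainer.train()
```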
------------------------------------------------------------------------------

*The first expert* is an Instruct 405B distillation/RP vector merge <b>(Supernova-Lite, Niitama1.1, Storm)</b>.

*The second expert* is an ERP/Reddit data merge <b>(Celeste1.5, Stheno3.4, Storm)</b>.
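The card does not publish the merge recipe, but two-expert Llama MoEs like this are commonly assembled with mergekit's `mergekit-moe` tool. The sketch below is purely illustrative: the expert and base repo ids are placeholders standing in for the merges described above, and `gate_mode` and the routing prompts are assumptions.

```python
# Hypothetical assembly sketch of a 2x8B MoE via mergekit-moe.
# The model ids below are PLACEHOLDERS for the merges described above;
# gate_mode and positive_prompts are assumptions, not the card's recipe.
import subprocess

config = """\
base_model: stheno-sunfall-base              # placeholder: Stheno 3.4 + Sunfall LoRA
gate_mode: hidden                            # assumption: route on hidden states
dtype: bfloat16
experts:
  - source_model: expert-1-instruct-rp-merge # placeholder repo id
    positive_prompts:
      - "instruction"                        # assumed routing prompt
  - source_model: expert-2-erp-reddit-merge  # placeholder repo id
    positive_prompts:
      - "roleplay"                           # assumed routing prompt
"""

with open("moe-config.yml", "w") as f:
    f.write(config)

# mergekit-moe writes the combined 2x8B model to ./output-moe
subprocess.run(["mergekit-moe", "moe-config.yml", "./output-moe"], check=True)
```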
-------------------------------------------------------------------------------

*The base model* is <b>Sao10K/Llama-3.1-8B-Stheno-v3.4</b> with the <b>Sunfall LoRA 0.6.1</b> applied to make it understand SillyTavern prompts and storywriting better (a loading sketch follows).
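A minimal sketch of that base assembly with PEFT: the two repo ids come from this card's front matter, while the merge-and-save flow is an assumption about how the adapter was applied.

```python
# Sketch of the shared base: Stheno 3.4 with the Sunfall LoRA merged in.
# Model ids are from the card's front matter; the merge-and-save flow
# itself is an assumption.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "Sao10K/Llama-3.1-8B-Stheno-v3.4", torch_dtype=torch.bfloat16
)

# If the adapter lives in a versioned subfolder of the repo, pass
# subfolder=... here as well.
model = PeftModel.from_pretrained(base, "crestf411/sunfall-peft")
model = model.merge_and_unload()  # bake the LoRA weights into the base
model.save_pretrained("stheno-sunfall-base")
```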
-------------------------------------------------------------------------------

*Finetuned* on [jondurbin/gutenberg-dpo-v0.1](https://huggingface.co/datasets/jondurbin/gutenberg-dpo-v0.1).

# Prompt Template:
```bash
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
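For inference, the template can be filled by hand as in the sketch below. The repo id is assumed from this card's title, and the sampling settings are placeholders.

```python
# Minimal inference sketch using the template above. The repo id is assumed
# from this card's title; sampling settings are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "v000000/L3.1-Celestial-Stone-2x8B-DPO"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are a storyteller.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "Write the opening line of a gothic novel.<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

# add_special_tokens=False because the template already starts with
# <|begin_of_text|>; letting the tokenizer add BOS would duplicate it.
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```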