lucyknada committed
Commit 88e46d3
1 Parent(s): c430e66

Upload ./README.md with huggingface_hub

Files changed (1):
  1. README.md +67 -0

README.md ADDED
@@ -0,0 +1,67 @@
---
license: apache-2.0
pipeline_tag: text-generation
tags:
- chat
base_model:
- Gryphe/Pantheon-RP-1.6-12b-Nemo
- Sao10K/MN-12B-Lyra-v3
- anthracite-org/magnum-v2.5-12b-kto
- nbeerbower/mistral-nemo-bophades-12B
---
### exl2 quant (measurement.json in main branch)
---
### check revisions for quants
---

# StarDust-12b-v1

## GGUF Quants

[Luni/StarDust-12b-v1-GGUF](https://huggingface.co/Luni/StarDust-12b-v1-GGUF/tree/main)

(These are made by me; I'm still slowly figuring out how to quant them.)
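
As a quick way to try one of those GGUF files, here is a minimal llama-cpp-python sketch. The filename is only an assumed example; use whichever quant you actually downloaded from the repo above, and see the Prompting section below for the ChatML format.

```py
# Minimal llama-cpp-python sketch for running a GGUF quant of StarDust-12b-v1.
# "StarDust-12b-v1-Q6_K.gguf" is an assumed example filename.
from llama_cpp import Llama

llm = Llama(
    model_path="./StarDust-12b-v1-Q6_K.gguf",
    n_ctx=8192,       # context window; adjust to your RAM/VRAM
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

prompt = (
    "<|im_start|>user\n"
    "Hi there!<|im_end|>\n"
    "<|im_start|>assistant\n"
)
result = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(result["choices"][0]["text"])
```
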
## Description | Usecase

In my opinion, this merge produces more vibrant, less generic Sonnet-inspired prose; it can be gentle or harsh where the scene asks for it.
I've personally been trying to get a bit more spice while also compensating for Magnum-v2.5, which on my end simply wouldn't stop yapping.

- This model is intended to be used as a role-playing model.
- Its direct conversational output is unreliable; it simply isn't made for that.
- To expand on the point above: the model is designed for roleplay, so direct instruction or general-purpose use is NOT recommended.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6303fa71fc783bfc7443e7ae/qRsB-uefbKKrAqxknbWtN.png)

## Prompting

Both Mistral and ChatML formats should work, though I had better results with ChatML.

ChatML example:
```py
"""<|im_start|>user
Hi there!<|im_end|>
<|im_start|>assistant
Nice to meet you!<|im_end|>
<|im_start|>user
Can I ask a question?<|im_end|>
<|im_start|>assistant
"""
```
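
If the bundled tokenizer ships a ChatML chat template, the same prompt can be built programmatically. The sketch below is only an illustration: it assumes the full-precision weights are hosted at `Luni/StarDust-12b-v1` and that `tokenizer.apply_chat_template` resolves to a ChatML template; if it does not, format the string by hand as shown above.

```py
# Minimal sketch: build the ChatML prompt with transformers and generate.
# Assumes the repo id and the presence of a ChatML chat template (see note above).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Luni/StarDust-12b-v1"  # assumed full-precision repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [
    {"role": "user", "content": "Hi there!"},
    {"role": "assistant", "content": "Nice to meet you!"},
    {"role": "user", "content": "Can I ask a question?"},
]

# add_generation_prompt=True appends the trailing "<|im_start|>assistant" turn.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.8)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
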

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method with [Sao10K/MN-12B-Lyra-v3](https://huggingface.co/Sao10K/MN-12B-Lyra-v3) as the base; a hypothetical configuration sketch follows the model list below.

### Models Merged

The following models were included in the merge:
* [Gryphe/Pantheon-RP-1.6-12b-Nemo](https://huggingface.co/Gryphe/Pantheon-RP-1.6-12b-Nemo)
* [anthracite-org/magnum-v2.5-12b-kto](https://huggingface.co/anthracite-org/magnum-v2.5-12b-kto)
* [nbeerbower/mistral-nemo-bophades-12B](https://huggingface.co/nbeerbower/mistral-nemo-bophades-12B)
* [Sao10K/MN-12B-Lyra-v3](https://huggingface.co/Sao10K/MN-12B-Lyra-v3)
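
For readers who want to reproduce a merge like this, the sketch below writes a hypothetical mergekit config and runs it through the `mergekit-yaml` CLI. The density/weight values and paths are illustrative placeholders, not the settings actually used for StarDust-12b-v1.

```py
# Hypothetical DARE-TIES mergekit config sketch; the density/weight values
# below are placeholders, NOT the real StarDust-12b-v1 recipe.
import subprocess
import textwrap

config = textwrap.dedent("""\
    merge_method: dare_ties
    base_model: Sao10K/MN-12B-Lyra-v3
    models:
      - model: Gryphe/Pantheon-RP-1.6-12b-Nemo
        parameters: {density: 0.5, weight: 0.33}   # placeholder values
      - model: anthracite-org/magnum-v2.5-12b-kto
        parameters: {density: 0.5, weight: 0.33}   # placeholder values
      - model: nbeerbower/mistral-nemo-bophades-12B
        parameters: {density: 0.5, weight: 0.33}   # placeholder values
    dtype: bfloat16
""")

with open("stardust-dare-ties.yml", "w") as f:
    f.write(config)

# mergekit-yaml <config> <output_dir> performs the merge and writes the result.
subprocess.run(["mergekit-yaml", "stardust-dare-ties.yml", "./merged-output"], check=True)
```
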

### Special Thanks

Special thanks to the SillyTilly and myself for helping me find the energy to finish this.