nonetrix committed
Commit 10d5199
1 Parent(s): 4b140a4

Update README.md

Files changed (1):
  1. README.md +9 -3
README.md CHANGED
@@ -9,9 +9,15 @@ library_name: transformers
 tags:
 - mergekit
 - merge
-
+- not-for-all-audiences
 ---
-# merge
+# Pippafeet-11B-0.1
+This model is a mix of some of the "best 7B roleplaying LLMs". I selected a few models based on "creativity" from a random benchmark, a final roleplaying LLM based on "IQ", and another LLM, merged in twice, that "excels at general tasks" for its size according to a separate benchmark. My goal was to combine the "most creative" smaller roleplaying LLMs, merge them, and boost the intelligence by incorporating a "decent general model" twice along with a "smarter" roleplaying LLM. I don't really trust benchmarks much, but I figured they would at least give the selection some direction; even if a model is overfitted to a dataset to score well, merging might negate that overfitting somewhat, and luckily that seems to have worked to some extent.
+
+In my limited testing, this model performs really well, giving decent replies most of the time... that is, if you ignore the fatal flaws, which unfortunately are inherent to how this model was created. Since it's made by directly stacking the weights of other models, it likes to constantly invent new words, stutter, and generally act strange. However, if you ignore this and fill in the blanks yourself, the model is quite decent. I plan to try to remove this weirdness with a LoRA if possible, but I'm not sure I'll be able to, so no promises. If you have the compute to fine-tune this model, I implore you to, because I think it is a promising base.
+
+Artwork source: https://twitter.com/Kumaartsu/status/1756793643384402070
+![image/png](https://cdn-uploads.huggingface.co/production/uploads/6342619a9948f573f37a4a60/MUD4N762ncyUw2dPfzVJ_.png)
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
@@ -59,4 +65,4 @@ slices:
 dtype: float16
 tokenizer_source: model:yam-peleg/Experiment30-7B
 
-```
+```
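
The merge recipe itself is truncated in this diff; only its tail (`dtype` and `tokenizer_source`) is visible. For readers unfamiliar with how mergekit "stacks" weights to turn 7B parts into an 11B model, a minimal sketch of a passthrough-style `slices` config looks like the following. Note this is an illustration, not the actual recipe used for Pippafeet-11B-0.1: the first model name and all layer ranges are placeholders.

```yaml
# Hypothetical passthrough merge: layer ranges are copied (stacked) directly
# from the source models rather than averaged, which is how an 11B model
# can be built out of 7B components.
slices:
  - sources:
      - model: some-org/roleplay-7B        # placeholder model name
        layer_range: [0, 24]               # lower layers from the first model
  - sources:
      - model: yam-peleg/Experiment30-7B   # general model named in tokenizer_source
        layer_range: [8, 32]               # overlapping upper layers from the second
merge_method: passthrough
dtype: float16
tokenizer_source: model:yam-peleg/Experiment30-7B
```

The overlapping `layer_range` values are what make the result larger than either source, and duplicating layers this way is also the likely cause of the stuttering and invented words described above.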