nonetrix committed
Commit
ff8da0f
1 Parent(s): a19d09c

Update README.md

Files changed (1): README.md (+1 −0)
README.md CHANGED
@@ -10,6 +10,7 @@ tags:
 - mergekit
 - merge
 - not-for-all-audiences
+license: apache-2.0
 ---
 # Pippafeet-11B-0.1
 This model is a mix of some of the "best 7B roleplaying LLMs". I selected a few models rated highly for "creativity" on a random benchmark, a roleplaying LLM chosen for its "IQ," and a general-purpose LLM, merged in twice, that "excels at general tasks" for its size according to a separate benchmark. My goal was to combine the "most creative" smaller roleplaying LLMs, merge them, and boost the result's intelligence by incorporating a "decent general model" twice alongside a "smarter" roleplaying LLM. I don't trust benchmarks much, but I hoped they would give the selection at least some direction. Even if an individual model is overfitted to a dataset to score well, merging might dilute that overfitting somewhat, and luckily that seems to have worked to some extent.
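For context, mergekit merges like this are driven by a YAML recipe. Below is a minimal sketch of how an ~11B model can be assembled from 7B checkpoints via a passthrough (layer-stacking) merge; the model names and layer ranges are hypothetical placeholders, not the actual recipe used for Pippafeet-11B-0.1:

```yaml
# Hypothetical mergekit recipe -- illustrative only, not the real Pippafeet recipe.
# Stacks overlapping layer ranges from two 7B models into one larger model.
slices:
  - sources:
      - model: example/roleplay-7b-a   # placeholder model name
        layer_range: [0, 24]
  - sources:
      - model: example/roleplay-7b-b   # placeholder model name
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```

A config like this is typically run with `mergekit-yaml config.yaml ./output-dir`, which writes the merged weights to the output directory.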