Kquant03 committed
Commit d4fa435
1 Parent(s): f71e73d

Update README.md

Files changed (1): README.md (+29 -7)
README.md CHANGED
@@ -1,22 +1,42 @@
  ---
  license: apache-2.0
  language:
  - en
- tags:
- - merge
  ---

- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/IHjiqZF5wGDzXPkEIAjTt.png)
- # To see what will happen

  [Join our Discord!](https://discord.gg/aEGuFph9)

  [BASE MODEL HERE](https://huggingface.co/Kquant03/Samlagast-7B-bf16)

- A merge using the task_arithmetic method of merging. This is a method my team and I haven't experimented with recently, hopefully this will work well.

- You can see the merge script here:
- ```models:
  - model: paulml/NeuralOmniWestBeaglake-7B
    parameters:
      weight: 1
@@ -35,3 +55,5 @@ parameters:
  normalize: true
  int8_mask: true
  dtype: float16
  ---
+ base_model:
+ - flemmingmiguel/MBX-7B-v3
+ - paulml/NeuralOmniWestBeaglake-7B
+ - FelixChao/Faraday-7B
+ - paulml/NeuralOmniBeagleMBX-v3-7B
+ tags:
+ - mergekit
+ - merge
  license: apache-2.0
  language:
  - en
  ---

+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6589d7e6586088fd2784a12c/eDLmpTkM4vuk8HiQcUzWv.png)

  [Join our Discord!](https://discord.gg/aEGuFph9)

  [BASE MODEL HERE](https://huggingface.co/Kquant03/Samlagast-7B-bf16)

+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ### Merge Method
+
+ This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using [paulml/NeuralOmniBeagleMBX-v3-7B](https://huggingface.co/paulml/NeuralOmniBeagleMBX-v3-7B) as a base.
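Task arithmetic, per the linked paper, builds the merged parameters from the base model plus a weighted sum of "task vectors", where each task vector is a fine-tuned model's parameters minus the base's. A minimal toy sketch of that idea in plain Python (made-up numbers for illustration, not mergekit's actual implementation):

```python
# Toy illustration of task arithmetic:
#   merged = base + sum_i w_i * (model_i - base)
# The parameter "tensors" here are plain Python lists with hypothetical
# values; mergekit applies the same arithmetic per tensor of the real models.

def task_arithmetic(base, finetunes, weights):
    merged = list(base)
    for model, w in zip(finetunes, weights):
        for j, (m, b) in enumerate(zip(model, base)):
            merged[j] += w * (m - b)  # add this model's weighted task vector
    return merged

base    = [0.10, 0.20, 0.30]  # base model parameters
model_a = [0.20, 0.20, 0.40]  # fine-tune A
model_b = [0.10, 0.30, 0.20]  # fine-tune B

merged = task_arithmetic(base, [model_a, model_b], [1.0, 0.5])
print(merged)  # approximately [0.20, 0.25, 0.35]
```

In the configuration below, each listed model contributes its task vector relative to the base model (paulml/NeuralOmniBeagleMBX-v3-7B), scaled by its per-model `weight`.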

+ ### Models Merged
+
+ The following models were included in the merge:
+ * [flemmingmiguel/MBX-7B-v3](https://huggingface.co/flemmingmiguel/MBX-7B-v3)
+ * [paulml/NeuralOmniWestBeaglake-7B](https://huggingface.co/paulml/NeuralOmniWestBeaglake-7B)
+ * [FelixChao/Faraday-7B](https://huggingface.co/FelixChao/Faraday-7B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
  - model: paulml/NeuralOmniWestBeaglake-7B
    parameters:
      weight: 1
@@ -35,3 +55,5 @@ parameters:
  normalize: true
  int8_mask: true
  dtype: float16
+
+ ```
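One detail worth noting in the configuration is `normalize: true`. My understanding (an assumption about mergekit's task_arithmetic behavior, worth checking against the mergekit source) is that normalization divides the combined weighted task vectors by the total weight, so the deltas blend as a weighted average rather than a raw sum. A toy sketch with hypothetical weights (most of the real per-model weights are elided in the diff above):

```python
# Sketch of task_arithmetic with and without weight normalization.
# Assumption: `normalize: true` divides the combined delta by the sum
# of weights; all values below are made up for illustration.

def merge(base, task_vectors, weights, normalize):
    total = sum(weights) if normalize else 1.0
    return [
        b + sum(w * tv[j] for w, tv in zip(weights, task_vectors)) / total
        for j, b in enumerate(base)
    ]

base = [0.0, 0.0]
tvs = [[0.4, 0.0], [0.0, 0.2]]  # deltas of two fine-tunes vs. the base

print(merge(base, tvs, [1.0, 1.0], normalize=False))  # raw sum of deltas
print(merge(base, tvs, [1.0, 1.0], normalize=True))   # averaged deltas
```

Under that assumption, normalization keeps the merged model from drifting too far from the base when several task vectors push in the same direction.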