grimjim committed on
Commit 97126c3
1 Parent(s): ce5731a

Update README.md


Updated model name and links

Files changed (1)
  1. README.md +58 -55
README.md CHANGED
@@ -1,55 +1,58 @@
- ---
- base_model:
- - grimjim/Mistral-Starling-merge-trial1-7B
- - grimjim/kukulemon-7B
- library_name: transformers
- tags:
- - mergekit
- - merge
- license: cc-by-nc-4.0
- pipeline_tag: text-generation
- ---
- # cuckoo-starling-7B
-
- For this merged model, rope theta in config.json was manually adjusted down to 100K, lower than the 1M that Mistral initially released for v0.2 but higher than the 10K that accompanied the practical 8K context of v0.1. We idly conjecture that 1M rope theta might improve performance on needle-in-a-haystack queries; however, during informal testing, narrative coherence occasionally seemed to suffer under 1M rope theta. Furthermore, the results reported in the arXiv paper [Scaling Laws of RoPE-based Extrapolation](https://arxiv.org/abs/2310.05209) suggest that 1M rope theta may be overkill for a 32K token context window.
-
- Lightly tested with temperature 0.9-1.0 and minP 0.02, using ChatML prompts. The model natively supports Alpaca prompts.
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the SLERP merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [grimjim/Mistral-Starling-merge-trial1-7B](https://huggingface.co/grimjim/Mistral-Starling-merge-trial1-7B)
- * [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
-   - sources:
-       - model: grimjim/Mistral-Starling-merge-trial1-7B
-         layer_range: [0, 32]
-       - model: grimjim/kukulemon-7B
-         layer_range: [0, 32]
- # or, the equivalent models: syntax:
- # models:
- merge_method: slerp
- base_model: grimjim/Mistral-Starling-merge-trial1-7B
- parameters:
-   t:
-     - filter: self_attn
-       value: [0, 0.5, 0.3, 0.7, 1]
-     - filter: mlp
-       value: [1, 0.5, 0.7, 0.3, 0]
-     - value: 0.5 # fallback for rest of tensors
- dtype: bfloat16
-
- ```
+ ---
+ base_model:
+ - grimjim/Mistral-Starling-merge-trial1-7B
+ - grimjim/kukulemon-7B
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ license: cc-by-nc-4.0
+ pipeline_tag: text-generation
+ ---
+ # cuckoo-starling-32k-7B
+
+ For this merged model, rope theta in config.json was manually adjusted down to 100K, lower than the 1M that Mistral initially released for v0.2 but higher than the 10K that accompanied the practical 8K context of v0.1. We idly conjecture that 1M rope theta might improve performance on needle-in-a-haystack queries; however, during informal testing, narrative coherence occasionally seemed to suffer under 1M rope theta. Furthermore, the results reported in the arXiv paper [Scaling Laws of RoPE-based Extrapolation](https://arxiv.org/abs/2310.05209) suggest that 1M rope theta may be overkill for a 32K token context window.
+
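As a quick check of the setting described above, here is a minimal sketch (assuming the `transformers` library; the repo id comes from the links further down) that reads the adjusted value back from the published config.json:

```python
# Minimal sketch: read rope_theta back from the published config.json.
# Expected to print 100000.0 if the adjustment described above is in place.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("grimjim/cuckoo-starling-32k-7B")
print(config.rope_theta)               # rope theta discussed above
print(config.max_position_embeddings)  # nominal context length of the base model
```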
+ Lightly tested with temperature 0.9-1.0 and minP 0.02, using ChatML prompts. The model natively supports Alpaca prompts.
+
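Those settings map directly onto a standard `transformers` sampling call. The following is a minimal sketch, assuming a recent `transformers` release (for `min_p` support), the `accelerate` package for `device_map="auto"`, and that the bundled tokenizer ships a chat template:

```python
# Minimal sketch: sample with the settings noted above (temperature 0.9-1.0, minP 0.02).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "grimjim/cuckoo-starling-32k-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Write a short scene set in a lighthouse."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.95,  # within the 0.9-1.0 range noted above
    min_p=0.02,        # minP as noted above
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```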
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ Full weights: [grimjim/cuckoo-starling-32k-7B](https://huggingface.co/grimjim/cuckoo-starling-32k-7B/)
+ GGUFs: [grimjim/cuckoo-starling-32k-7B-GGUF](https://huggingface.co/grimjim/cuckoo-starling-32k-7B-GGUF/)
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the SLERP merge method.
+
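For illustration only (this is not mergekit's code): spherical linear interpolation between two weight tensors, the operation the method name refers to, can be sketched as follows, with the interpolation factor `t` supplied per tensor by the configuration shown below:

```python
# Illustrative sketch of SLERP between two weight tensors of the same shape.
# mergekit's actual implementation adds per-layer schedules and edge-case handling.
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    # Angle between the two tensors, computed on normalized copies.
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    omega = torch.arccos(torch.clamp(a_unit @ b_unit, -1.0, 1.0))
    if omega.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        mixed = (1 - t) * a_flat + t * b_flat
    else:
        sin_omega = torch.sin(omega)
        mixed = (torch.sin((1 - t) * omega) / sin_omega) * a_flat \
              + (torch.sin(t * omega) / sin_omega) * b_flat
    return mixed.reshape(a.shape).to(a.dtype)
```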
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [grimjim/Mistral-Starling-merge-trial1-7B](https://huggingface.co/grimjim/Mistral-Starling-merge-trial1-7B)
+ * [grimjim/kukulemon-7B](https://huggingface.co/grimjim/kukulemon-7B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+   - sources:
+       - model: grimjim/Mistral-Starling-merge-trial1-7B
+         layer_range: [0, 32]
+       - model: grimjim/kukulemon-7B
+         layer_range: [0, 32]
+ # or, the equivalent models: syntax:
+ # models:
+ merge_method: slerp
+ base_model: grimjim/Mistral-Starling-merge-trial1-7B
+ parameters:
+   t:
+     - filter: self_attn
+       value: [0, 0.5, 0.3, 0.7, 1]
+     - filter: mlp
+       value: [1, 0.5, 0.7, 0.3, 0]
+     - value: 0.5 # fallback for rest of tensors
+ dtype: bfloat16
+
+ ```
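With mergekit installed, a configuration like the one above can typically be applied via the package's command-line entry point, e.g. `mergekit-yaml merge-config.yaml ./output-model-directory`, where the config filename and output path are placeholders.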