gaverfraxz committed
Commit 6b0271a
1 parent: bd12a1f

Update README.md
---
base_model:
- mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
- meta-llama/Meta-Llama-3.1-8B-Instruct
- meta-llama/Meta-Llama-3.1-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3.1
---
# outputModels

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged with the della merge method, using [meta-llama/Meta-Llama-3.1-8B](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B) as the base model.
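At a high level, della-style merging computes each fine-tuned model's parameter deltas relative to the base, stochastically drops low-magnitude delta entries, rescales the survivors to preserve expected magnitude, and adds the combined delta back to the base weights. The toy NumPy sketch below illustrates that idea only; it is not mergekit's implementation, the mapping from `density`/`epsilon` to keep probabilities is an assumption, and the sign-election step used in practice is omitted for brevity.

```python
import numpy as np

def della_merge_sketch(base, tuned_models, density=0.7, lam=1.1,
                       epsilon=0.25, seed=0):
    """Illustrative della-style merge of one parameter tensor.

    1. Compute task deltas: delta_i = tuned_i - base.
    2. Keep each delta entry with a magnitude-dependent probability
       centered on `density`, spread by `epsilon` (assumed mapping).
    3. Rescale survivors by 1 / p_keep to preserve expected magnitude.
    4. Sum the deltas, scale by `lam`, and add back to the base.
    """
    rng = np.random.default_rng(seed)
    merged_delta = np.zeros_like(base, dtype=float)
    for tuned in tuned_models:
        delta = tuned - base
        # Rank entries by |delta|, then map ranks linearly onto keep
        # probabilities in [density - eps/2, density + eps/2]
        # (higher magnitude -> higher keep probability).
        ranks = np.argsort(np.argsort(np.abs(delta), axis=None))
        ranks = ranks.reshape(delta.shape)
        p_keep = (density - epsilon / 2) + epsilon * ranks / max(delta.size - 1, 1)
        p_keep = np.clip(p_keep, 1e-6, 1.0)
        mask = rng.random(delta.shape) < p_keep
        merged_delta += np.where(mask, delta / p_keep, 0.0)
    return base + lam * merged_delta
```

With `density: 0.7` and `epsilon: 0.25` as in the configuration below, roughly 70% of each model's delta entries survive on average, with high-magnitude entries favored.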
### Models Merged

The following models were included in the merge:
* [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated)
* [meta-llama/Meta-Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated
    parameters:
      weight: 1
  - model: meta-llama/Meta-Llama-3.1-8B-Instruct
    parameters:
      weight: 1
merge_method: della
base_model: meta-llama/Meta-Llama-3.1-8B
parameters:
  normalize: false
  int8_mask: true
  density: 0.7
  lambda: 1.1
  epsilon: 0.25
dtype: float16
```
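To reproduce the merge locally, the configuration above can be passed to mergekit's CLI. This is a hedged sketch: it assumes mergekit is installed, the YAML is saved as `config.yaml`, and the output directory name is illustrative.

```shell
pip install mergekit
mergekit-yaml config.yaml ./outputModels
```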