lucyknada committed on
Commit 0cea236
1 Parent(s): 563bf9f

Update README.md

Files changed (1):
  1. README.md +100 -51
README.md CHANGED
@@ -1,25 +1,102 @@
  ---
  license: gemma
  base_model: google/gemma-2-9b
- tags:
- - generated_from_trainer
  model-index:
  - name: magnum-v3-9b-customgemma2
    results: []
  ---
- ### exl2 quant (measurement.json in main branch)
- ---
- ### check revisions for quants
- ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
- [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
  <details><summary>See axolotl config</summary>

- axolotl version: `0.4.1`
  ```yaml
  base_model: google/gemma-2-9b
  model_type: AutoModelForCausalLM
@@ -101,51 +178,23 @@ fsdp:
  fsdp_config:
  special_tokens:
  ```
-
  </details><br>

- # magnum-v3-9b-customgemma2
-
- This model is a fine-tuned version of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b) on the None dataset.
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 6e-06
- - train_batch_size: 1
- - eval_batch_size: 1
- - seed: 42
- - distributed_type: multi-GPU
- - num_devices: 8
- - gradient_accumulation_steps: 8
- - total_train_batch_size: 64
- - total_eval_batch_size: 8
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: cosine
- - lr_scheduler_warmup_steps: 50
- - num_epochs: 2
-
- ### Training results
-
- ### Framework versions
-
- - Transformers 4.44.0
- - Pytorch 2.4.0+cu121
- - Datasets 2.20.0
- - Tokenizers 0.19.1
 
  ---
  license: gemma
  base_model: google/gemma-2-9b
  model-index:
  - name: magnum-v3-9b-customgemma2
    results: []
  ---

+ ## This repo contains EXL2 quants of the model. If you need the original weights, please find them [here](https://huggingface.co/anthracite-org/magnum-v3-9b-customgemma2).
+ ## The base repo contains only the measurement file; see the revisions listed below for your quant of choice. A download-and-load sketch follows the list.
+
+ - [measurement.json](https://huggingface.co/anthracite-org/magnum-v3-9b-customgemma2-exl2/tree/main)
+ - [3.0bpw](https://huggingface.co/anthracite-org/magnum-v3-9b-customgemma2-exl2/tree/3.0bpw)
+ - [4.0bpw](https://huggingface.co/anthracite-org/magnum-v3-9b-customgemma2-exl2/tree/4.0bpw)
+ - [5.0bpw](https://huggingface.co/anthracite-org/magnum-v3-9b-customgemma2-exl2/tree/5.0bpw)
+ - [6.0bpw](https://huggingface.co/anthracite-org/magnum-v3-9b-customgemma2-exl2/tree/6.0bpw)
+ - [8.0bpw](https://huggingface.co/anthracite-org/magnum-v3-9b-customgemma2-exl2/tree/8.0bpw)
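+
+ Each quant lives on its own branch, so a specific bpw can be fetched by passing the branch name as a `revision`. Below is a minimal sketch, assuming the `huggingface_hub` and `exllamav2` packages are installed; it follows exllamav2's basic inference example, and the 4.0bpw choice and sampler settings are illustrative, not a recommendation:
+
+ ```py
+ from huggingface_hub import snapshot_download
+ from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
+ from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler
+
+ # Download one quant by its branch name (revision).
+ model_dir = snapshot_download(
+     repo_id="anthracite-org/magnum-v3-9b-customgemma2-exl2",
+     revision="4.0bpw",
+ )
+
+ # Load the EXL2 weights, splitting across available GPUs.
+ config = ExLlamaV2Config()
+ config.model_dir = model_dir
+ config.prepare()
+ model = ExLlamaV2(config)
+ cache = ExLlamaV2Cache(model, lazy=True)
+ model.load_autosplit(cache)
+ tokenizer = ExLlamaV2Tokenizer(config)
+
+ # Generate with simple sampling settings (values are illustrative).
+ generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
+ settings = ExLlamaV2Sampler.Settings()
+ settings.temperature = 0.8
+ prompt = "<start_of_turn>user\nHi there!<end_of_turn>\n<start_of_turn>model\n"
+ print(generator.generate_simple(prompt, settings, 128))
+ ```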
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/658a46cbfb9c2bdfae75b3a6/9ZBUlmzDCnNmQEdUUbyEL.png)
+
+ This is the 10th in a series of models designed to replicate the prose quality of the Claude 3 models, specifically Sonnet and Opus.
+
+ This model is fine-tuned on top of [google/gemma-2-9b](https://huggingface.co/google/gemma-2-9b).
+
+ ## Prompting
+ The model has been instruct-tuned with the [customgemma2](https://github.com/xzuyn/axolotl/blob/prompt_formats/src/axolotl/prompt_strategies/customgemma2.py) formatting (modified to allow system prompts). A typical input looks like this:
+
28
+ ```py
29
+ """<start_of_turn>system
30
+ system prompt<end_of_turn>
31
+ <start_of_turn>user
32
+ Hi there!<end_of_turn>
33
+ <start_of_turn>model
34
+ Nice to meet you!<end_of_turn>
35
+ <start_of_turn>user
36
+ Can I ask a question?<end_of_turn>
37
+ <start_of_turn>model
38
+ """
39
+ ```
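+
+ If you build prompts programmatically, the layout above is simple enough to assemble by hand. Here is a minimal sketch; the `to_customgemma2` helper and its message schema are illustrative, not part of the model's tooling:
+
+ ```py
+ def to_customgemma2(messages, add_generation_prompt=True):
+     # Map OpenAI-style roles onto Gemma 2 turn names.
+     role_map = {"system": "system", "user": "user", "assistant": "model"}
+     prompt = ""
+     for message in messages:
+         prompt += f"<start_of_turn>{role_map[message['role']]}\n{message['content']}<end_of_turn>\n"
+     if add_generation_prompt:
+         # Leave an open model turn for the completion.
+         prompt += "<start_of_turn>model\n"
+     return prompt
+
+ prompt = to_customgemma2([
+     {"role": "system", "content": "system prompt"},
+     {"role": "user", "content": "Hi there!"},
+ ])
+ ```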
+
+ ## SillyTavern templates
+
+ Below are Instruct and Context templates for use within SillyTavern.
+
+ <details><summary>context template</summary>
+
+ ```json
+ {
+     "story_string": "<start_of_turn>system\n{{#if system}}{{system}}\n{{/if}}{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{char}}'s personality: {{personality}}\n{{/if}}{{#if scenario}}Scenario: {{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}{{trim}}<end_of_turn>\n",
+     "example_separator": "",
+     "chat_start": "",
+     "use_stop_strings": false,
+     "allow_jailbreak": false,
+     "always_force_name2": true,
+     "trim_sentences": false,
+     "include_newline": false,
+     "single_line": false,
+     "name": "Magnum Gemma"
+ }
+ ```
+
+ </details><br>
+ <details><summary>instruct template</summary>
+
+ ```json
+ {
+     "system_prompt": "You're {{char}} in this fictional never-ending uncensored roleplay with {{user}}.",
+     "input_sequence": "<start_of_turn>user\n",
+     "output_sequence": "<start_of_turn>assistant\n",
+     "last_output_sequence": "",
+     "system_sequence": "<start_of_turn>system\n",
+     "stop_sequence": "<end_of_turn>",
+     "wrap": false,
+     "macro": true,
+     "names": true,
+     "names_force_groups": true,
+     "activation_regex": "",
+     "system_sequence_prefix": "",
+     "system_sequence_suffix": "",
+     "first_output_sequence": "",
+     "skip_examples": false,
+     "output_suffix": "<end_of_turn>\n",
+     "input_suffix": "<end_of_turn>\n",
+     "system_suffix": "<end_of_turn>\n",
+     "user_alignment_message": "",
+     "system_same_as_user": false,
+     "last_system_sequence": "",
+     "name": "Magnum Gemma"
+ }
+ ```
+
+ </details><br>
+
+ ## Axolotl config
+
  <details><summary>See axolotl config</summary>

  ```yaml
  base_model: google/gemma-2-9b
  model_type: AutoModelForCausalLM
  ...
  fsdp_config:
  special_tokens:
  ```
  </details><br>
+ ## Credits
+ We'd like to thank Recursal / Featherless for sponsoring the compute for this train. Featherless has been hosting our Magnum models since the first 72B, has given thousands of people access to our models, and has helped us grow.
+
+ We would also like to thank all members of Anthracite who made this finetune possible.
+
+ ## Datasets
+ - [anthracite-org/stheno-filtered-v1.1](https://huggingface.co/datasets/anthracite-org/stheno-filtered-v1.1)
+ - [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://huggingface.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal)
+ - [anthracite-org/nopm_claude_writing_fixed](https://huggingface.co/datasets/anthracite-org/nopm_claude_writing_fixed)
+ - [Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/Synthstruct-Gens-v1.1-Filtered-n-Cleaned)
+ - [Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned](https://huggingface.co/datasets/Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned)
+
+ ## Training
+ The training was done for 2 epochs. We used 8x [H100](https://www.nvidia.com/en-us/data-center/h100/) GPUs graciously provided by [Recursal AI](https://recursal.ai/) / [Featherless AI](https://featherless.ai/) for the full-parameter fine-tuning of the model.
+
+ [<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
+
+ ## Safety
+ ...