---
library_name: transformers
base_model: FourOhFour/Magic_v2_8B
tags:
- generated_from_trainer
model-index:
- name: outputs/out
  results: []
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/Fatgirl_v2_8B-GGUF
This is a quantized version of [FourOhFour/Fatgirl_v2_8B](https://huggingface.co/FourOhFour/Fatgirl_v2_8B), created using llama.cpp.
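For reference, here is a minimal sketch of running one of these GGUF files with the `llama-cpp-python` bindings; the `filename` pattern is an assumption, so substitute whichever quant level you actually download from the repo:

```python
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

# Download one quant from the repo and load it. The Q4_K_M pattern is an
# assumption; pick the quantization level that fits your hardware.
llm = Llama.from_pretrained(
    repo_id="QuantFactory/Fatgirl_v2_8B-GGUF",
    filename="*Q4_K_M.gguf",  # glob matching a single file in the repo
    n_ctx=8192,               # the model was trained with sequence_len 8192
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```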

# Original Model Card

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: FourOhFour/Magic_v2_8B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: Epiculous/SynthRP-Gens-v1.1-Filtered-n-Cleaned
    type: sharegpt
    conversation: chatml
  - path: ResplendentAI/bluemoon
    type: sharegpt
    conversation: chatml
  - path: openerotica/freedom-rp
    type: sharegpt
    conversation: chatml
  - path: MinervaAI/Aesir-Preview
    type: sharegpt
    conversation: chatml
  - path: anthracite-core/c2_logs_32k_v1.1
    type: sharegpt
    conversation: chatml
  - path: Nitral-AI/Creative_Writing-ShareGPT
    type: sharegpt
    conversation: chatml
  - path: PJMixers/lodrick-the-lafted_OpusStories-Story2Prompt-ShareGPT
    type: sharegpt
    conversation: chatml

chat_template: chatml

val_set_size: 0.002
output_dir: ./outputs/out

adapter:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:

sequence_len: 8192
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true

plugins:
  - axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true

wandb_project: mini8B
wandb_entity:
wandb_watch:
wandb_name: mini8B
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 2
num_epochs: 2
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.00001
weight_decay: 0.05

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: true

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.1
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 2

debug:
deepspeed: deepspeed_configs/zero3_bf16.json
fsdp:
fsdp_config:

special_tokens:
  pad_token: <pad>

```

</details><br>
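The config above trains on ChatML-formatted conversations (`chat_template: chatml`), so inference prompts should use ChatML markers. Here is a minimal sketch with the Transformers tokenizer, assuming the original repo's tokenizer ships this chat template:

```python
from transformers import AutoTokenizer

# Assumes the repo's tokenizer_config.json carries the ChatML template.
tokenizer = AutoTokenizer.from_pretrained("FourOhFour/Fatgirl_v2_8B")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# Renders the conversation with <|im_start|>/<|im_end|> markers and appends
# the assistant header so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```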

# outputs/out

This model is a fine-tuned version of [FourOhFour/Magic_v2_8B](https://huggingface.co/FourOhFour/Magic_v2_8B), trained on the datasets listed in the axolotl config above.
It achieves the following results on the evaluation set:
- Loss: 2.6845
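For intuition, the evaluation loss maps to perplexity via `exp(loss)`, assuming the reported value is mean cross-entropy in nats (the usual Trainer convention):

```python
import math

eval_loss = 2.6845
# Perplexity = exp(mean cross-entropy); assumes the loss is in nats.
print(f"perplexity ≈ {math.exp(eval_loss):.2f}")  # ≈ 14.65
```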

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 58
- num_epochs: 2

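The derived values in the list above follow from the axolotl config; a quick sanity check of the arithmetic:

```python
# total_train_batch_size = micro batch per device * grad accumulation * devices
micro_batch_size = 2
gradient_accumulation_steps = 8
num_devices = 2
assert micro_batch_size * gradient_accumulation_steps * num_devices == 32

# warmup_ratio 0.1 over roughly 2 * 296 optimizer steps gives about 59 warmup
# steps, in line with the reported 58 (the exact count depends on rounding).
print(round(0.1 * 2 * 296))  # -> 59
```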
### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.7471        | 0.0034 | 1    | 2.8918          |
| 1.5602        | 0.2507 | 74   | 2.7319          |
| 1.4587        | 0.5015 | 148  | 2.6953          |
| 1.5022        | 0.7522 | 222  | 2.6729          |
| 1.4152        | 1.0030 | 296  | 2.6487          |
| 1.2528        | 1.2501 | 370  | 2.6922          |
| 1.2245        | 1.5002 | 444  | 2.6843          |
| 1.2803        | 1.7503 | 518  | 2.6845          |

### Framework versions

- Transformers 4.45.0.dev0
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1