PocketDoc committed
Commit 1c9992c
1 Parent(s): 0ba4fb7

Create README.md

Files changed (1): README.md (+145, -0)

README.md ADDED

---
tags:
- generated_from_trainer
model-index:
- name: Locutusque/TinyMistral-248M-v2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.3.0`
```yaml
base_model: Locutusque/TinyMistral-248M-v2
model_type: MistralForCausalLM
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

dataset_processes: 20

datasets:
  - path: epfl-llm/guidelines
    type: completion
    field: clean_text
  - path: JeanKaddour/minipile
    type: completion
    field: text

dataset_prepared_path: TinyMistral-FFT-data
val_set_size: 0.001
output_dir: ./TinyMistral-FFT

sequence_len: 2048
sample_packing: false
pad_to_sequence_len: true

adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:

# wandb configuration
wandb_project: TinyMistral-FFT
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 8
micro_batch_size: 1
num_epochs: 1
optimizer: paged_adamw_32bit
lr_scheduler: constant
cosine_min_lr_ratio:

learning_rate: 0.00005

train_on_inputs: true
group_by_length: false
bf16: false
fp16: false
tf32: true

gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
auto_resume_from_checkpoints: false
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: false
flash_attn_cross_entropy: false
flash_attn_rms_norm: true
flash_attn_fuse_qkv: false
flash_attn_fuse_mlp: true

warmup_steps: 10
evals_per_epoch: 100
# eval_steps: 10
eval_table_size:
saves_per_epoch: 50
debug:
deepspeed: #deepspeed/zero2.json # multi-gpu only
weight_decay: 0

# tokens:


special_tokens:
  bos_token: "<|bos|>"
  eos_token: "<|endoftext|>"
  unk_token: "<unk>"
```

</details><br>

# TinyMistral-StructureEvaluator

This model is a full fine-tune of [Locutusque/TinyMistral-248M-v2](https://huggingface.co/Locutusque/TinyMistral-248M-v2) on the epfl-llm/guidelines and JeanKaddour/minipile datasets (see the axolotl config above).
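
A minimal inference sketch with Transformers; the repository id below is a placeholder and should be replaced with this model's actual id:

```python
# Minimal inference sketch using the standard Transformers text-generation API.
# "PocketDoc/TinyMistral-StructureEvaluator" is a placeholder id; substitute the
# actual repository name of this model.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PocketDoc/TinyMistral-StructureEvaluator"  # placeholder, adjust as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The patient presented with", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```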

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Training text comes from the `clean_text` field of epfl-llm/guidelines and the `text` field of JeanKaddour/minipile, used as plain completion data; 0.1% of the prepared data (val_set_size: 0.001) was held out for evaluation. A rough sketch of what these completion entries amount to is shown below.

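The sketch below only illustrates what the two `completion`-type entries in the config amount to: plain text pulled from the named fields and tokenized as-is. It is not axolotl's actual preprocessing; the `train` split names and the 2048-token truncation (taken from `sequence_len`) are assumptions.

```python
# Illustrative only: plain-text completion data from the two datasets named in
# the axolotl config above. Not axolotl's real preprocessing pipeline.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Locutusque/TinyMistral-248M-v2")

# Split names are assumed; epfl-llm/guidelines may require accepting its terms.
guidelines = load_dataset("epfl-llm/guidelines", split="train")
minipile = load_dataset("JeanKaddour/minipile", split="train")

def tokenize_field(example, field):
    # sequence_len: 2048 in the config above
    return tokenizer(example[field], truncation=True, max_length=2048)

guidelines = guidelines.map(lambda ex: tokenize_field(ex, "clean_text"))
minipile = minipile.map(lambda ex: tokenize_field(ex, "text"))
```
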
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 39460

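As a cross-check of the numbers above, a minimal sketch in plain PyTorch; the run itself used axolotl's `paged_adamw_32bit`, which `torch.optim.AdamW` only approximates here:

```python
# Sketch of how the listed values fit together; not the actual training script.
import torch

micro_batch_size = 1            # train_batch_size above
gradient_accumulation_steps = 8
effective_batch_size = micro_batch_size * gradient_accumulation_steps
assert effective_batch_size == 8  # matches total_train_batch_size above

model = torch.nn.Linear(8, 8)   # placeholder module standing in for the LM
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=5e-5,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=0.0,
)
```
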
### Training results



### Framework versions

- Transformers 4.37.0.dev0
- PyTorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.15.0
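
A small optional runtime check for comparing an environment against the versions listed above; it only prints what is currently installed:

```python
# Print the installed versions of the libraries listed above so they can be
# compared against the versions used for this run.
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)
print("PyTorch:", torch.__version__)
print("Datasets:", datasets.__version__)
print("Tokenizers:", tokenizers.__version__)
```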