RikiyaT committed
Commit 4869eb4
1 Parent(s): c47e296

End of training

Files changed (1)
1. README.md +9 -9
README.md CHANGED

@@ -1,20 +1,20 @@
 ---
-base_model: meta-llama/Meta-Llama-3.1-70B
+base_model: mistralai/Mistral-Large-Instruct-2407
 library_name: peft
-license: llama3.1
+license: other
 tags:
 - generated_from_trainer
 model-index:
-- name: Meta-Llama-3.1-70B-tac08
+- name: Mistral-Large-Instruct-2407-tac08
 results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Meta-Llama-3.1-70B-tac08
+# Mistral-Large-Instruct-2407-tac08
 
-This model is a fine-tuned version of [meta-llama/Meta-Llama-3.1-70B](https://huggingface.co/meta-llama/Meta-Llama-3.1-70B) on the None dataset.
+This model is a fine-tuned version of [mistralai/Mistral-Large-Instruct-2407](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407) on the None dataset.
 
 ## Model description
 
@@ -33,12 +33,12 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0002
-- train_batch_size: 1
+- learning_rate: 0.0004
+- train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 4
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 100
@@ -52,7 +52,7 @@ The following hyperparameters were used during training:
 ### Framework versions
 
 - PEFT 0.12.0
-- Transformers 4.44.0
+- Transformers 4.44.2
 - Pytorch 2.4.0+cu121
 - Datasets 2.20.0
 - Tokenizers 0.19.1
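For reference, a minimal sketch of how the updated hyperparameters would map onto `transformers.TrainingArguments`. Only the values shown in the diff come from the card; `output_dir` and the single-GPU assumption are illustrative.

```python
# A sketch only: reconstructs the run configuration from the card's values,
# assuming a single GPU so that 4 (per-device) x 4 (accumulation) = 16 total.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral-Large-Instruct-2407-tac08",  # illustrative
    learning_rate=4e-4,                # updated from 2e-4
    per_device_train_batch_size=4,     # updated from 1
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    seed=42,
    adam_beta1=0.9,                    # Adam betas/epsilon from the card
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
)

# Effective train batch size matches the card's total_train_batch_size: 16.
effective = (training_args.per_device_train_batch_size
             * training_args.gradient_accumulation_steps)
assert effective == 16
```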
 
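Since the card describes a PEFT adapter trained on top of mistralai/Mistral-Large-Instruct-2407, a minimal loading sketch follows; the adapter repo id is an assumption inferred from the model name and may differ.

```python
# A sketch only: the adapter repo id below is hypothetical, inferred from the
# model name in the card. The base model's license is "other" (see the card),
# so downloading it may require accepting its terms on the Hub.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_ID = "mistralai/Mistral-Large-Instruct-2407"
ADAPTER_ID = "RikiyaT/Mistral-Large-Instruct-2407-tac08"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(BASE_ID)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE_ID, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)  # attach adapter weights
```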