jingyeom committed on
Commit a209a32
1 Parent(s): 7357259

Update README.md

Files changed (1): README.md +0 -30
README.md CHANGED
@@ -19,36 +19,6 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [LDCC/LDCC-SOLAR-10.7B](https://huggingface.co/LDCC/LDCC-SOLAR-10.7B) on the generator dataset.
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
-
-### Training hyperparameters
-
-The following hyperparameters were used during training:
-- learning_rate: 1e-06
-- train_batch_size: 1
-- eval_batch_size: 8
-- seed: 42
-- distributed_type: multi-GPU
-- num_devices: 7
-- gradient_accumulation_steps: 16
-- total_train_batch_size: 112
-- total_eval_batch_size: 56
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
-- lr_scheduler_type: cosine
-- lr_scheduler_warmup_ratio: 0.03
-- num_epochs: 2
 
 ### Training results
 
 
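The removed hyperparameter list contains two derived values, total_train_batch_size and total_eval_batch_size. As a sanity check, a minimal sketch (variable names mirror the README's bullet labels; they are illustrative, not a library API) recomputes both from the per-device settings:

```python
# Sanity-check sketch: recompute the derived batch sizes from the removed
# "Training hyperparameters" list. Names mirror the README's bullet labels
# and are illustrative only, not a real training API.
train_batch_size = 1             # per-device train batch
eval_batch_size = 8              # per-device eval batch
num_devices = 7                  # distributed_type: multi-GPU
gradient_accumulation_steps = 16

# Effective train batch = per-device batch * devices * accumulation steps
total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps

# Evaluation does not accumulate gradients, so only the device count multiplies in
total_eval_batch_size = eval_batch_size * num_devices

print(total_train_batch_size, total_eval_batch_size)  # 112 56
```

Both results match the totals the README listed (112 and 56), confirming the removed section was internally consistent.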