lapp0 committed
Commit 74537bd
1 Parent(s): 846aa58

End of training

README.md CHANGED
@@ -16,13 +16,13 @@ This student model is distilled from the teacher model [gpt2](https://huggingfac
  The [Distily](https://github.com/lapp0/distily) library was used for this distillation.
 
  It achieves the following results on the evaluation set:
- - eval_enwikippl: 201.1306
- - eval_frwikippl: 1264.6479
- - eval_zhwikippl: 692.1948
- - eval_loss: 1.2818
- - eval_runtime: 17.7588
- - eval_samples_per_second: 56.31
- - eval_steps_per_second: 7.039
+ - eval_enwikippl: 207.9922
+ - eval_frwikippl: 1314.4666
+ - eval_zhwikippl: 759.8159
+ - eval_loss: 1.3326
+ - eval_runtime: 17.3702
+ - eval_samples_per_second: 57.57
+ - eval_steps_per_second: 7.196
 
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment.
@@ -45,7 +45,7 @@ More information needed
  ### Training hyperparameters
 
  The following hyperparameters were used during training:
- - distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=0, loss_fn=None, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=2.0, loss_fn=jsd, layer_mapper=None, projector=None))
+ - distillation_objective: DistillationObjective(logits_loss_component=LossComponent(label=logits, weight=1, loss_fn=kl, layer_mapper=None, projector=None), hs_loss_component=LossComponent(label=hs, weight=0, loss_fn=None, layer_mapper=None, projector=None), attn_loss_component=LossComponent(label=attn, weight=2.0, loss_fn=reverse_kl, layer_mapper=None, projector=None))
  - train_embeddings: True
  - learning_rate: 4e-05
  - train_batch_size: 8
@@ -62,20 +62,20 @@ Peak GPU Memory: 8.2195 GB
  | step | epoch | enwikippl | frwikippl | loss | runtime | samples_per_second | steps_per_second | zhwikippl |
  | --- | --- | --- | --- | --- | --- | --- | --- | --- |
  | **teacher eval** | | 30.2086 | 57.2728 | | | | | 18.1784 |
- | 0 | 0 | 55429.6875 | 57698.8047 | 6.0515 | 17.8022 | 56.173 | 7.022 | 56988.9141 |
- | 1000 | 0.0808 | 674.2750 | 4349.4961 | 1.9954 | 17.6958 | 56.51 | 7.064 | 19961.1016 |
- | 2000 | 0.1616 | 486.4362 | 3202.9236 | 1.8123 | 17.6829 | 56.552 | 7.069 | 1855.9937 |
- | 3000 | 0.2424 | 398.6795 | 2596.3247 | 1.6971 | 17.7713 | 56.27 | 7.034 | 975.8663 |
- | 4000 | 0.3232 | 350.7618 | 2375.6218 | 1.6039 | 17.792 | 56.205 | 7.026 | 869.2946 |
- | 5000 | 0.4040 | 302.1302 | 1985.2614 | 1.5168 | 17.7421 | 56.363 | 7.045 | 967.0451 |
- | 6000 | 0.4848 | 263.9246 | 1671.2548 | 1.4466 | 17.7779 | 56.25 | 7.031 | 822.3207 |
- | 7000 | 0.5657 | 242.7309 | 1513.9550 | 1.3874 | 17.8314 | 56.081 | 7.01 | 750.5385 |
- | 8000 | 0.6465 | 221.2715 | 1384.2833 | 1.3367 | 17.7638 | 56.294 | 7.037 | 824.5199 |
- | 9000 | 0.7273 | 201.1306 | 1264.6479 | 1.2818 | 17.7588 | 56.31 | 7.039 | 692.1948 |
- | 10000 | 0.8081 | 184.5633 | 1112.0341 | 1.2357 | 17.6966 | 56.508 | 7.064 | 578.3190 |
- | 11000 | 0.8889 | 171.4912 | 1108.7455 | 1.1873 | 17.8206 | 56.115 | 7.014 | 545.0269 |
- | 12000 | 0.9697 | 156.4515 | 982.5362 | 1.1465 | 17.741 | 56.367 | 7.046 | 586.4849 |
- | 12375 | 1.0 | 154.3638 | 955.2133 | 1.1337 | 17.7532 | 56.328 | 7.041 | 598.8307 |
+ | 0 | 0 | 55429.6875 | 57698.8047 | 6.1518 | 17.3225 | 57.728 | 7.216 | 56988.9141 |
+ | 1000 | 0.0808 | 693.0135 | 4581.8110 | 2.0460 | 17.3292 | 57.706 | 7.213 | 22366.3984 |
+ | 2000 | 0.1616 | 504.2434 | 3241.0867 | 1.8627 | 17.42 | 57.405 | 7.176 | 1925.1605 |
+ | 3000 | 0.2424 | 416.2050 | 2635.7954 | 1.7568 | 17.2717 | 57.898 | 7.237 | 924.6143 |
+ | 4000 | 0.3232 | 367.7481 | 2426.7476 | 1.6637 | 17.2866 | 57.848 | 7.231 | 843.0013 |
+ | 5000 | 0.4040 | 314.2136 | 2124.5867 | 1.5737 | 17.3864 | 57.516 | 7.19 | 970.9272 |
+ | 6000 | 0.4848 | 274.5013 | 1727.5643 | 1.5012 | 17.3269 | 57.714 | 7.214 | 815.5406 |
+ | 7000 | 0.5657 | 250.4276 | 1508.2014 | 1.4380 | 17.3171 | 57.747 | 7.218 | 763.2737 |
+ | 8000 | 0.6465 | 227.7920 | 1387.4103 | 1.3836 | 17.3674 | 57.579 | 7.197 | 706.1053 |
+ | 9000 | 0.7273 | 207.9922 | 1314.4666 | 1.3326 | 17.3702 | 57.57 | 7.196 | 759.8159 |
+ | 10000 | 0.8081 | 190.8745 | 1171.5941 | 1.2857 | 17.3634 | 57.592 | 7.199 | 598.8307 |
+ | 11000 | 0.8889 | 175.8197 | 1119.1125 | 1.2359 | 17.3533 | 57.626 | 7.203 | 493.6122 |
+ | 12000 | 0.9697 | 159.0854 | 1000.5724 | 1.1915 | 17.3916 | 57.499 | 7.187 | 562.3265 |
+ | 12375 | 1.0 | 157.0114 | 957.9113 | 1.1794 | 17.3573 | 57.613 | 7.202 | 671.0787 |
 
  ### Framework versions
  - Distily 0.2.0
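
The substantive change in this commit is the attention-transfer loss: `loss_fn=jsd` becomes `loss_fn=reverse_kl`, while the logits KL component and all weights stay the same. As an illustrative sketch only (textbook definitions, not Distily's actual implementation; `student_attn` and `teacher_attn` are hypothetical names for per-head attention probability tensors), the two divergences differ as follows:

```python
# Sketch only — not Distily's code. Assumes student_attn / teacher_attn are
# (batch, heads, seq, seq) attention probability tensors, rows summing to 1.
import torch

def reverse_kl(student_attn: torch.Tensor, teacher_attn: torch.Tensor,
               eps: float = 1e-9) -> torch.Tensor:
    # KL(student || teacher): mode-seeking; heavily penalizes student mass
    # placed where the teacher assigns (near-)zero probability.
    s = student_attn.clamp_min(eps)
    t = teacher_attn.clamp_min(eps)
    return (s * (s.log() - t.log())).sum(dim=-1).mean()

def jsd(student_attn: torch.Tensor, teacher_attn: torch.Tensor,
        eps: float = 1e-9) -> torch.Tensor:
    # Jensen-Shannon divergence: symmetric and bounded; both KL terms are
    # taken against the mixture m = (s + t) / 2.
    s = student_attn.clamp_min(eps)
    t = teacher_attn.clamp_min(eps)
    m = 0.5 * (s + t)
    kl_sm = (s * (s.log() - m.log())).sum(dim=-1).mean()
    kl_tm = (t * (t.log() - m.log())).sum(dim=-1).mean()
    return 0.5 * (kl_sm + kl_tm)
```

Against the previous JSD run, this reverse-KL run lands slightly worse on every reported metric (eval_loss 1.3326 vs 1.2818; eval_enwikippl 207.99 vs 201.13).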
logs/attn_loss_fn=reverse_kl, attn_weight=2.0/events.out.tfevents.1723676703.93d6cbb3ad53 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78a78d3d78b6ebd9c78b529710340daeb08b05b009af92ddf5a7db4624d56adc
+ size 249
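
The added TensorBoard events file is stored via Git LFS, so the diff shows only the three-line pointer (spec version, content hash, byte size) rather than the binary itself. A minimal sketch of reading such a pointer, with a hypothetical helper name:

```python
# Hypothetical helper, not part of any library: parse the "key value" lines
# of a Git LFS pointer file like the one added above.
def parse_lfs_pointer(text: str) -> dict:
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)  # e.g. "sha256:78a7..."
    return {
        "version": fields["version"],       # LFS spec URL
        "hash_algo": algo,
        "oid": digest,                      # content-addressed hash of the blob
        "size_bytes": int(fields["size"]),  # true size of the stored file
    }
```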