CyrexPro committed
Commit 00dd555
1 Parent(s): 6fe518e

Model save
README.md ADDED
@@ -0,0 +1,72 @@
+ ---
+ base_model: google/pegasus-large
+ tags:
+ - generated_from_trainer
+ metrics:
+ - rouge
+ model-index:
+ - name: pegasus-large-finetuned-cnn_dailymail
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # pegasus-large-finetuned-cnn_dailymail
+
+ This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the CNN/DailyMail dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5155
+ - Rouge1: 53.2323
+ - Rouge2: 38.6836
+ - Rougel: 41.8756
+ - Rougelsum: 50.7526
+ - Bleu 1: 39.2946
+ - Bleu 2: 33.2337
+ - Bleu 3: 30.2125
+ - Meteor: 40.4525
+ - Compression rate: 1.4202
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5.6e-05
+ - train_batch_size: 2
+ - eval_batch_size: 2
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 6
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleu 1 | Bleu 2 | Bleu 3 | Meteor | Compression rate |
+ |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:|:-------:|:-------:|:----------------:|
+ | 1.1676 | 1.0 | 5000 | 0.4626 | 58.2243 | 46.8729 | 49.6754 | 56.3425 | 45.8036 | 41.0494 | 38.8167 | 47.5864 | 1.3223 |
+ | 0.8474 | 2.0 | 10000 | 0.4951 | 51.7813 | 37.5081 | 40.999 | 49.3373 | 38.9702 | 32.8144 | 30.0222 | 39.7542 | 1.3725 |
+ | 0.7632 | 3.0 | 15000 | 0.4712 | 54.9872 | 41.6279 | 44.557 | 52.6927 | 42.0867 | 36.3443 | 33.5877 | 43.0071 | 1.3649 |
+ | 0.7009 | 4.0 | 20000 | 0.4875 | 54.5016 | 40.85 | 44.0557 | 52.0705 | 40.2939 | 34.6751 | 31.8994 | 41.8203 | 1.4397 |
+ | 0.6563 | 5.0 | 25000 | 0.5036 | 52.3997 | 37.6472 | 41.0743 | 49.8349 | 38.2882 | 32.1617 | 29.1582 | 39.4024 | 1.441 |
+ | 0.6274 | 6.0 | 30000 | 0.5155 | 53.2323 | 38.6836 | 41.8756 | 50.7526 | 39.2946 | 33.2337 | 30.2125 | 40.4525 | 1.4202 |
+
+
+ ### Framework versions
+
+ - Transformers 4.40.0
+ - Pytorch 2.2.2+cu118
+ - Datasets 2.19.0
+ - Tokenizers 0.19.1
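
Note: as a minimal sketch only, the hyperparameters listed in the model card above could be reproduced with `Seq2SeqTrainingArguments` from Transformers 4.40. The `output_dir`, `predict_with_generate`, and all dataset/Trainer wiring are illustrative assumptions, not part of this commit.

```python
# Sketch of training arguments matching the README's hyperparameter list.
# Only the numeric values come from the model card; everything else is assumed.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="pegasus-large-finetuned-cnn_dailymail",  # assumed output path
    learning_rate=5.6e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 (library defaults)
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    predict_with_generate=True,  # assumed, needed for ROUGE/BLEU during eval
)
```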
generation_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "forced_eos_token_id": 1,
+   "length_penalty": 0.8,
+   "max_length": 256,
+   "num_beams": 8,
+   "pad_token_id": 0,
+   "transformers_version": "4.40.0"
+ }
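
When the model is loaded from this repository, `generate()` picks up the defaults above (8-beam search, max length 256, length penalty 0.8) from `generation_config.json`. A minimal inference sketch follows; the repository id is an assumption based on the committer and model name, not something confirmed by this commit.

```python
# Minimal inference sketch; repo_id and the placeholder article are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "CyrexPro/pegasus-large-finetuned-cnn_dailymail"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

article = "..."  # stand-in for a real article-length input
inputs = tokenizer(article, truncation=True, return_tensors="pt")

# generate() applies the repository's generation_config.json by default:
# num_beams=8, max_length=256, length_penalty=0.8, forced EOS at token id 1.
summary_ids = model.generate(**inputs)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```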
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:3fd0850beadabddd5a0ea8e3aff57188c93823492a3ec6f9d12696e4b5de95cd
+ oid sha256:04d12f629b9a4663c6fd0824dd3675c6200b14cae06ee21aed19076b73aebab7
  size 2283652852
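
The git-lfs pointer above records the SHA-256 of the new weights file as its `oid`. A quick sketch for verifying a locally downloaded copy against it; the local file path is an assumption, and only the expected hash comes from the diff above.

```python
# Verify a downloaded model.safetensors against the LFS pointer's sha256 oid.
import hashlib

expected = "04d12f629b9a4663c6fd0824dd3675c6200b14cae06ee21aed19076b73aebab7"

h = hashlib.sha256()
with open("model.safetensors", "rb") as f:  # assumed local path
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == expected, "checksum mismatch"
```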
runs/Apr28_01-10-22_DESKTOP-I570M0U/events.out.tfevents.1714255824.DESKTOP-I570M0U.2571.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9761660ce86c2087971118f33318b0b7cfba0ddfd6d8cd143dadc16a479b8f56
- size 12096
+ oid sha256:3cbe3fc1a5373ad39f0b52927282d3a50651cefb045c0020ec11e32c73cfdb90
+ size 13419