---
license: apache-2.0
base_model: distilroberta-base
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: RoBERTa_conll_learning_rate1e4
  results:
  - task:
      name: Token Classification
      type: token-classification
    dataset:
      name: conll2003
      type: conll2003
      config: conll2003
      split: validation
      args: conll2003
    metrics:
    - name: Precision
      type: precision
      value: 0.9345188632208742
    - name: Recall
      type: recall
      value: 0.9463143722652305
    - name: F1
      type: f1
      value: 0.94037963040388
    - name: Accuracy
      type: accuracy
      value: 0.9862998987450523
---

# RoBERTa_conll_learning_rate1e4

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0665
- Precision: 0.9345
- Recall: 0.9463
- F1: 0.9404
- Accuracy: 0.9863

## Model description

The model is [distilroberta-base](https://huggingface.co/distilroberta-base), a distilled variant of RoBERTa, with a token-classification head fine-tuned for named entity recognition on CoNLL-2003. As the name suggests, it was trained with a learning rate of 1e-4 (see the hyperparameters below). It predicts the four CoNLL-2003 entity types: person (PER), organization (ORG), location (LOC), and miscellaneous (MISC).

## Intended uses & limitations

The model is intended for named entity recognition on English text similar in style to the CoNLL-2003 newswire corpus. Performance is likely to degrade on other domains, on noisy or differently cased text, and on entity types outside the four CoNLL-2003 classes.
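
For quick experimentation, the model can be loaded with the Transformers `pipeline` API. A minimal sketch, assuming the repository id `evanchuaa/RoBERTa_conll_learning_rate1e4` (inferred from the model name on this card; adjust if it differs):

```python
from transformers import pipeline

# Repo id assumed from the model name on this card.
ner = pipeline(
    "token-classification",
    model="evanchuaa/RoBERTa_conll_learning_rate1e4",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

print(ner("Hugging Face is based in New York City."))
# Each prediction is a dict with entity_group, score, word, start, end.
```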

## Training and evaluation data

The model was fine-tuned on the train split of the [conll2003](https://huggingface.co/datasets/conll2003) dataset and evaluated on its validation split; all metrics reported above are computed on the validation split.
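
For reference, the dataset can be loaded with the Datasets library (version 2.20.0 per the framework versions below). A minimal sketch; note that script-based versions of conll2003 may require `trust_remote_code=True`:

```python
from datasets import load_dataset

# CoNLL-2003: train/validation/test splits of pre-tokenized sentences.
# Depending on the datasets version, trust_remote_code=True may be required.
dataset = load_dataset("conll2003")

example = dataset["train"][0]
print(example["tokens"])    # word-level tokens
print(example["ner_tags"])  # integer NER labels (IOB2 scheme)
```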

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
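
These values map directly onto Transformers `TrainingArguments`. A minimal sketch of the equivalent configuration (the output directory name is hypothetical; the Adam betas and epsilon listed above are the `Trainer` defaults):

```python
from transformers import TrainingArguments

# Hyperparameters taken from the list above; "roberta-conll-lr1e4" is a
# hypothetical output directory name.
training_args = TrainingArguments(
    output_dir="roberta-conll-lr1e4",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```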

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0909        | 1.0   | 1756 | 0.0778          | 0.8810    | 0.9130 | 0.8967 | 0.9786   |
| 0.0413        | 2.0   | 3512 | 0.0720          | 0.9242    | 0.9337 | 0.9289 | 0.9838   |
| 0.0194        | 3.0   | 5268 | 0.0665          | 0.9345    | 0.9463 | 0.9404 | 0.9863   |
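
The precision, recall, and F1 figures above are consistent with entity-level scoring, which `generated_from_trainer` token-classification runs typically compute with the `seqeval` library. A minimal sketch of that scoring, using illustrative label sequences in the IOB2 scheme rather than real model output:

```python
from seqeval.metrics import f1_score, precision_score, recall_score

# Illustrative gold and predicted label sequences (IOB2 scheme); a real
# evaluation would use the model's predictions on the validation split.
y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "O", "O"]]

print(precision_score(y_true, y_pred))  # 1.0   (1 predicted entity, 1 correct)
print(recall_score(y_true, y_pred))     # 0.5   (2 gold entities, 1 found)
print(f1_score(y_true, y_pred))         # 0.667 (harmonic mean of the two)
```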

### Framework versions

- Transformers 4.41.2
- PyTorch 2.3.1+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1