pabRomero committed
Commit 468b626
1 Parent(s): 17a54bb

Training complete

Files changed (1):
  1. README.md +11 -9
README.md CHANGED
@@ -21,11 +21,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the None dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.0883
- - Precision: 0.7726
- - Recall: 0.7639
- - F1: 0.7682
- - Accuracy: 0.9735
+ - Loss: 0.0854
+ - Precision: 0.7857
+ - Recall: 0.7899
+ - F1: 0.7878
+ - Accuracy: 0.9747
 
 ## Model description
 
@@ -51,20 +51,22 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 2
+ - num_epochs: 4
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
- | No log | 1.0 | 231 | 0.1049 | 0.7242 | 0.7475 | 0.7356 | 0.9691 |
- | No log | 2.0 | 462 | 0.0883 | 0.7726 | 0.7639 | 0.7682 | 0.9735 |
+ | No log | 1.0 | 231 | 0.1015 | 0.7485 | 0.7440 | 0.7462 | 0.9703 |
+ | No log | 2.0 | 462 | 0.0878 | 0.7618 | 0.7750 | 0.7684 | 0.9728 |
+ | 0.2646 | 3.0 | 693 | 0.0859 | 0.7759 | 0.7912 | 0.7835 | 0.9737 |
+ | 0.2646 | 4.0 | 924 | 0.0854 | 0.7857 | 0.7899 | 0.7878 | 0.9747 |
 
 
 ### Framework versions
 
 - Transformers 4.44.2
- - Pytorch 2.4.0+cu121
+ - Pytorch 2.4.1+cu121
 - Datasets 2.21.0
 - Tokenizers 0.19.1
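
For reference, the hyperparameters listed in the updated card map onto Hugging Face `TrainingArguments` roughly as sketched below. This is a minimal, hypothetical sketch, not the author's actual training script: the output directory, learning rate, and batch sizes are not visible in this diff and are placeholders.

```python
# Minimal sketch of the hyperparameters listed above using Hugging Face
# TrainingArguments. Only the parameters shown in the diff are taken from
# the card; placeholder values are marked as such.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-uncased-finetuned",  # placeholder, not from the card
    learning_rate=5e-5,                        # placeholder: not shown in this hunk
    num_train_epochs=4,                        # changed from 2 to 4 in this commit
    lr_scheduler_type="linear",                # from the card
    warmup_ratio=0.1,                          # from the card
    adam_beta1=0.9,                            # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                         # epsilon=1e-08
    fp16=True,                                 # "Native AMP" mixed precision (needs a GPU)
    eval_strategy="epoch",                     # results table reports metrics per epoch
)
```

Passing these arguments to a `Trainer` together with the model, tokenizer, dataset, and a metric function that computes precision/recall/F1/accuracy would produce per-epoch evaluations like those in the results table above.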