---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-goemotions-15epochs-run2
  results: []
---
# bert-goemotions-15epochs-run2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset (the repository name suggests GoEmotions, but the dataset is not recorded in this card). It achieves the following results on the evaluation set:
- Loss: 0.1106
- Accuracy Thresh: 0.9619
- F1 weighted: 0.3982
- F1 macro: 0.3154
- Accuracy: 0.4171
- Recall weighted: 0.4171
- Recall macro: 0.3281
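The "Accuracy Thresh" figure is much higher than the plain accuracy because it is (in typical multi-label GoEmotions fine-tunes) element-wise accuracy after thresholding the sigmoid outputs, counting every (example, label) cell rather than exact label-set matches. A toy sketch of that metric, assuming a 0.5 threshold (the exact evaluation code for this run is not in the card):

```python
import numpy as np

def accuracy_thresh(logits, labels, thresh=0.5):
    """Element-wise accuracy after thresholding sigmoid probabilities.

    Every (example, label) cell counts as correct when the thresholded
    probability matches the 0/1 target, which is why this value (~0.96)
    is far above subset accuracy on a many-label problem.
    """
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid over raw logits
    preds = (probs > thresh).astype(int)
    return (preds == labels).mean()

# Toy batch: 2 examples x 4 labels
logits = np.array([[ 2.0, -3.0,  0.5, -1.0],
                   [-2.0,  4.0, -0.5,  1.0]])
labels = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0]])
print(accuracy_thresh(logits, labels))  # 7 of 8 cells match -> 0.875
```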
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
- mixed_precision_training: Native AMP
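The hyperparameters above map onto the standard `Trainer` API roughly as follows; `output_dir` and the evaluation schedule are illustrative placeholders, not taken from the card:

```python
from transformers import TrainingArguments

# Sketch of arguments matching the listed hyperparameters
# (Transformers 4.35, where Adam defaults are betas=(0.9, 0.999), eps=1e-8).
training_args = TrainingArguments(
    output_dir="bert-goemotions-15epochs-run2",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=15,
    fp16=True,                        # Native AMP mixed precision
    evaluation_strategy="epoch",      # assumption: per-epoch eval, as logged below
)
```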
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy Thresh | F1 Weighted | F1 Macro | Accuracy | Recall Weighted | Recall Macro |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.1259 | 1.0 | 5286 | 0.1127 | 0.9617 | 0.3891 | 0.3008 | 0.4163 | 0.4163 | 0.3214 |
| 0.1102 | 2.0 | 10572 | 0.1106 | 0.9619 | 0.3982 | 0.3154 | 0.4171 | 0.4171 | 0.3281 |
| 0.1052 | 3.0 | 15858 | 0.1107 | 0.9619 | 0.3981 | 0.3181 | 0.4169 | 0.4169 | 0.3279 |
| 0.1008 | 4.0 | 21144 | 0.1117 | 0.9616 | 0.3997 | 0.3224 | 0.4169 | 0.4169 | 0.3280 |
| 0.0968 | 5.0 | 26430 | 0.1138 | 0.9609 | 0.3983 | 0.3246 | 0.4111 | 0.4111 | 0.3348 |
| 0.0934 | 6.0 | 31716 | 0.1158 | 0.9604 | 0.3893 | 0.3185 | 0.3979 | 0.3979 | 0.3333 |
| 0.0902 | 7.0 | 37002 | 0.1188 | 0.9596 | 0.3844 | 0.3127 | 0.3933 | 0.3933 | 0.3290 |
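The persistent gap between weighted F1 (~0.40) and macro F1 (~0.32) points to class imbalance: rare emotion labels score lower per-class F1, which drags down the unweighted macro mean but barely affects the support-weighted mean. A toy sketch of the two averages (not the exact evaluation code for this run):

```python
import numpy as np

def f1_per_class(preds, labels):
    """Binary F1 for each label column of multi-label 0/1 arrays."""
    tp = ((preds == 1) & (labels == 1)).sum(axis=0)
    fp = ((preds == 1) & (labels == 0)).sum(axis=0)
    fn = ((preds == 0) & (labels == 1)).sum(axis=0)
    denom = 2 * tp + fp + fn               # F1 = 2TP / (2TP + FP + FN)
    return np.where(denom > 0, 2 * tp / np.maximum(denom, 1), 0.0)

# Label 0 is three times as frequent as label 1
labels = np.array([[1, 0], [1, 0], [1, 0], [0, 1]])
preds  = np.array([[1, 0], [1, 0], [0, 0], [1, 0]])

f1 = f1_per_class(preds, labels)           # per-class F1: [2/3, 0.0]
support = labels.sum(axis=0)               # [3, 1]
macro = f1.mean()                          # 1/3: rare class pulls it down
weighted = (f1 * support / support.sum()).sum()  # 0.5: dominated by label 0
```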
### Framework versions
- Transformers 4.35.1
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1