---
base_model: xxxxxxxxx
tags:
  - generated_from_trainer
datasets:
  - AmazonScience/massive
model-index:
  - name: massive_indo
    results: []
---

# massive_indo

This model is a fine-tuned version of xxxxxxxxx on the [AmazonScience/massive](https://huggingface.co/datasets/AmazonScience/massive) dataset.
It achieves the following results on the evaluation set:

- Loss: 2.1952

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
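
The card does not document which MASSIVE locale or splits were used. As a hedged sketch, the dataset named in the metadata can be loaded with the `datasets` library; the `id-ID` configuration below is only an assumption based on the model name (`massive_indo`), not something stated in this card.

```python
from datasets import load_dataset

# Assumption: the Indonesian locale of MASSIVE ("id-ID"), inferred from the
# model name "massive_indo"; the card does not state which locale was used.
massive = load_dataset("AmazonScience/massive", "id-ID")

train_ds = massive["train"]
eval_ds = massive["validation"]
print(train_ds[0])
```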

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
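
As a minimal sketch (not part of the original card), these hyperparameters map onto `transformers.TrainingArguments` as shown below. The output directory and the steps-based evaluation cadence are assumptions; the results table reports validation loss every 100 steps, which is consistent with `eval_steps=100`.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="massive_indo",      # assumed output directory
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,                 # Adam settings listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="steps",    # assumed from the 100-step cadence in the results table
    eval_steps=100,
    logging_steps=100,
)
```

These arguments would then be passed to a `Trainer` together with the model, tokenizer, and the train/validation splits.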

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.8949        | 2.08  | 100  | 4.8610          |
| 4.5401        | 4.17  | 200  | 4.5439          |
| 4.2447        | 6.25  | 300  | 4.2866          |
| 4.0005        | 8.33  | 400  | 4.0553          |
| 3.7874        | 10.42 | 500  | 3.8500          |
| 3.5807        | 12.5  | 600  | 3.6576          |
| 3.3725        | 14.58 | 700  | 3.4922          |
| 3.1977        | 16.67 | 800  | 3.3297          |
| 3.0234        | 18.75 | 900  | 3.1869          |
| 2.8863        | 20.83 | 1000 | 3.0530          |
| 2.7463        | 22.92 | 1100 | 2.9420          |
| 2.6025        | 25.0  | 1200 | 2.8200          |
| 2.4935        | 27.08 | 1300 | 2.7207          |
| 2.3695        | 29.17 | 1400 | 2.6279          |
| 2.2666        | 31.25 | 1500 | 2.5470          |
| 2.1584        | 33.33 | 1600 | 2.4736          |
| 2.0767        | 35.42 | 1700 | 2.4043          |
| 2.0374        | 37.5  | 1800 | 2.3516          |
| 1.9982        | 39.58 | 1900 | 2.3028          |
| 1.9241        | 41.67 | 2000 | 2.2679          |
| 1.8844        | 43.75 | 2100 | 2.2384          |
| 1.8488        | 45.83 | 2200 | 2.2143          |
| 1.8441        | 47.92 | 2300 | 2.1988          |
| 1.8368        | 50.0  | 2400 | 2.1952          |
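
The validation loss is still decreasing slightly at the final checkpoint, which suggests training had not fully plateaued after 50 epochs. A small matplotlib sketch (not part of the original card) that plots the curves from the table above:

```python
import matplotlib.pyplot as plt

# Values transcribed from the results table above.
steps = list(range(100, 2500, 100))
train_loss = [4.8949, 4.5401, 4.2447, 4.0005, 3.7874, 3.5807, 3.3725, 3.1977,
              3.0234, 2.8863, 2.7463, 2.6025, 2.4935, 2.3695, 2.2666, 2.1584,
              2.0767, 2.0374, 1.9982, 1.9241, 1.8844, 1.8488, 1.8441, 1.8368]
val_loss = [4.8610, 4.5439, 4.2866, 4.0553, 3.8500, 3.6576, 3.4922, 3.3297,
            3.1869, 3.0530, 2.9420, 2.8200, 2.7207, 2.6279, 2.5470, 2.4736,
            2.4043, 2.3516, 2.3028, 2.2679, 2.2384, 2.2143, 2.1988, 2.1952]

plt.plot(steps, train_loss, label="training loss")
plt.plot(steps, val_loss, label="validation loss")
plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
plt.show()
```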

### Framework versions

- Transformers 4.33.3
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3