
whisper-small-commonvoice-en

This model is a fine-tuned version of openai/whisper-small. The training dataset is not recorded in the card metadata ("None"), though the model name suggests Common Voice English. It achieves the following results on the evaluation set:

  • Loss: 0.3573
  • WER: 14.7768
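WER here is word error rate, reported as a percentage (lower is better). As an illustration of the metric only, a minimal word-level edit-distance WER in plain Python; the training run itself would have used a library implementation (e.g. evaluate/jiwer), not this sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference length, in percent."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 100.0 * d[len(ref)][len(hyp)] / len(ref)

# One missing word out of six reference words.
print(wer("the cat sat on the mat", "the cat sat on mat"))
```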

Model description

More information needed

Intended uses & limitations

More information needed
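Pending further documentation, the evident use is English speech transcription. A minimal usage sketch with the Transformers ASR pipeline, assuming the checkpoint id from this card; `sample.wav` is a placeholder path:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub (downloads weights on first use).
asr = pipeline(
    "automatic-speech-recognition",
    model="Ronysalem/whisper-small-commonvoice-en",
)

# Transcribe a local audio file; "sample.wav" is a placeholder.
result = asr("sample.wav")
print(result["text"])
```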

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 3e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • training_steps: 1000
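With a linear scheduler, 50 warmup steps, and 1000 total steps, the learning rate ramps up to 3e-05 and then decays linearly to zero. A minimal sketch of that schedule, assuming the standard linear-with-warmup rule Transformers applies:

```python
def lr_at_step(step, base_lr=3e-05, warmup_steps=50, total_steps=1000):
    """Linear warmup to base_lr, then linear decay to zero at total_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    # Decay from base_lr at warmup_steps down to 0 at total_steps.
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

print(lr_at_step(50))    # peak learning rate
print(lr_at_step(1000))  # end of training
```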

Training results

Training Loss  Epoch   Step  Validation Loss  WER
-------------  ------  ----  ---------------  -------
0.2697         0.4545    50  0.2484           13.1027
0.3113         0.9091   100  0.2571           13.3482
0.0671         1.3636   150  0.2864           13.7946
0.0833         1.8182   200  0.2984           14.3973
0.0297         2.2727   250  0.3126           14.2411
0.0253         2.7273   300  0.3181           14.4866
0.0160         3.1818   350  0.3273           14.8661
0.0138         3.6364   400  0.3177           13.9955
0.0120         4.0909   450  0.3335           14.4866
0.0054         4.5455   500  0.3401           14.6429
0.0040         5.0000   550  0.3459           14.5982
0.0011         5.4545   600  0.3535           15.1339
0.0026         5.9091   650  0.3422           15.1116
0.0010         6.3636   700  0.3452           14.5313
0.0007         6.8182   750  0.3510           14.6875
0.0005         7.2727   800  0.3540           14.6429
0.0005         7.7273   850  0.3554           14.7545
0.0005         8.1818   900  0.3564           14.7321
0.0005         8.6364   950  0.3570           14.7545
0.0004         9.0909  1000  0.3573           14.7768

Framework versions

  • Transformers 4.42.3
  • PyTorch 2.1.2
  • Datasets 2.20.0
  • Tokenizers 0.19.1
Safetensors

  • Model size: 242M params
  • Tensor type: F32

Model tree for Ronysalem/whisper-small-commonvoice-en

  • Finetuned from: openai/whisper-small