
whisper-medium-anime-5k

This model is a PEFT adapter fine-tuned from openai/whisper-medium on an unspecified dataset. It achieves the following result on the evaluation set:

  • Loss: 0.2012

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 8
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.3
  • training_steps: 5000
  • mixed_precision_training: Native AMP
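
Two of the listed values are derived from the others: the effective batch size is the per-device batch size times the accumulation steps, and a linear scheduler with a warmup ratio converts that ratio into a step count. A quick sanity check of this arithmetic (pure Python, nothing beyond the numbers above is assumed):

```python
# Hyperparameters as listed above.
train_batch_size = 8
gradient_accumulation_steps = 4
training_steps = 5000
warmup_ratio = 0.3

# Each optimizer update sees the gradients of this many examples.
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching total_train_batch_size above

# A linear scheduler with warmup_ratio 0.3 warms up over this many steps.
warmup_steps = int(training_steps * warmup_ratio)
print(warmup_steps)  # 1500
```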

Training results

Training Loss | Epoch  | Step | Validation Loss
------------- | ------ | ---- | ---------------
2.3486        | 0.1082 |  200 | 1.9858
1.2438        | 0.2165 |  400 | 0.7836
0.3552        | 0.3247 |  600 | 0.2941
0.2846        | 0.4329 |  800 | 0.2758
0.2712        | 0.5411 | 1000 | 0.2623
0.2602        | 0.6494 | 1200 | 0.2537
0.2530        | 0.7576 | 1400 | 0.2450
0.2397        | 0.8658 | 1600 | 0.2387
0.2364        | 0.9740 | 1800 | 0.2348
0.2238        | 1.0823 | 2000 | 0.2292
0.2147        | 1.1905 | 2200 | 0.2262
0.2129        | 1.2987 | 2400 | 0.2220
0.2123        | 1.4069 | 2600 | 0.2189
0.2118        | 1.5152 | 2800 | 0.2165
0.2067        | 1.6234 | 3000 | 0.2140
0.2077        | 1.7316 | 3200 | 0.2123
0.2012        | 1.8398 | 3400 | 0.2089
0.1987        | 1.9481 | 3600 | 0.2082
0.1951        | 2.0563 | 3800 | 0.2058
0.1836        | 2.1645 | 4000 | 0.2048
0.1841        | 2.2727 | 4200 | 0.2043
0.1812        | 2.3810 | 4400 | 0.2028
0.1762        | 2.4892 | 4600 | 0.2025
0.1836        | 2.5974 | 4800 | 0.2017
0.1777        | 2.7056 | 5000 | 0.2012

Framework versions

  • PEFT 0.10.0
  • Transformers 4.41.0.dev0
  • Pytorch 2.2.2+cu118
  • Datasets 2.19.0
  • Tokenizers 0.19.1
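
Because this checkpoint is a PEFT adapter rather than a full model, inference requires loading the openai/whisper-medium base model first and then attaching the adapter on top. A minimal sketch, assuming the adapter repository id is sin2piusc/whisper-medium-anime-5k and that `audio` is a waveform you have already loaded (both are assumptions, not stated in this card):

```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "openai/whisper-medium"
adapter_id = "sin2piusc/whisper-medium-anime-5k"  # assumed repository id

processor = WhisperProcessor.from_pretrained(base_id)
base_model = WhisperForConditionalGeneration.from_pretrained(base_id)

# Attach the PEFT adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# `audio` is a placeholder: a 16 kHz mono waveform as a 1-D float array,
# e.g. loaded with librosa or torchaudio.
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    generated_ids = model.generate(input_features=inputs.input_features)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

Merging the adapter into the base weights with `model.merge_and_unload()` is an option if you want a standalone model for faster inference.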