---
base_model: openai/whisper-large-v3-turbo
datasets:
- Kushtrim/common_voice_19_sq
language:
- sq
library_name: transformers
license: mit
metrics:
- wer
tags:
- generated_from_trainer
model-index:
- name: Whisper Large V3 Turbo SQ
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Common Voice 19.0
      type: Kushtrim/common_voice_19_sq
      args: 'config: sq, split: test'
    metrics:
    - type: wer
      value: 22.451899358658114
      name: Wer
---

# Whisper Large V3 Turbo SQ

This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on the Common Voice 19.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3040
- WER: 22.4519

## Model description

More information needed

## Intended uses & limitations

More information needed. A minimal inference sketch is provided under "How to use" at the end of this card.

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP

These settings are mirrored in the `Seq2SeqTrainingArguments` sketch at the end of this card.

### Training results

| Training Loss | Epoch  | Step | Validation Loss | WER     |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.5578        | 0.3163 | 250  | 0.6037          | 43.1130 |
| 0.4707        | 0.6325 | 500  | 0.5295          | 40.3256 |
| 0.4221        | 0.9488 | 750  | 0.4559          | 35.2985 |
| 0.2853        | 1.2650 | 1000 | 0.4205          | 33.3103 |
| 0.2685        | 1.5813 | 1250 | 0.3798          | 30.7844 |
| 0.256         | 1.8975 | 1500 | 0.3552          | 28.5890 |
| 0.18          | 2.2138 | 1750 | 0.3480          | 27.5728 |
| 0.2158        | 2.5300 | 2000 | 0.3349          | 27.2521 |
| 0.1396        | 2.8463 | 2250 | 0.3182          | 24.2526 |
| 0.1123        | 3.1626 | 2500 | 0.3175          | 23.5520 |
| 0.124         | 3.4788 | 2750 | 0.3100          | 23.4090 |
| 0.0908        | 3.7951 | 3000 | 0.3040          | 22.4519 |

### Framework versions

- Transformers 4.45.2
- PyTorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1
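## How to use

Usage details are not documented above, so the following is a minimal inference sketch, not an official snippet. It assumes the checkpoint is published on the Hub; the repo id below is a placeholder, and `audio.wav` stands in for any Albanian speech recording.

```python
import torch
from transformers import pipeline

# Placeholder repo id; substitute the actual Hub path of this checkpoint.
model_id = "Kushtrim/whisper-large-v3-turbo-sq"

asr = pipeline(
    "automatic-speech-recognition",
    model=model_id,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# Pin the language and task so Whisper neither auto-detects nor translates.
result = asr(
    "audio.wav",
    generate_kwargs={"language": "albanian", "task": "transcribe"},
)
print(result["text"])
```

For recordings longer than Whisper's 30-second window, pass `chunk_length_s=30` in the pipeline call so the audio is transcribed in chunks.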
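## Reproducing the training configuration

The hyperparameters listed under "Training hyperparameters" translate into roughly the following `Seq2SeqTrainingArguments`. This is a sketch, not the original training script: the output directory is an assumption, and the 250-step evaluation cadence is inferred from the results table.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v3-turbo-sq",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 4 x 4 = total train batch size of 16
    num_train_epochs=4,
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=250,                  # inferred from the results table above
    predict_with_generate=True,      # required to compute WER at eval time
)
```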
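## About the reported metric

WER (word error rate) is the word-level edit distance between the model's transcription and the reference, divided by the number of reference words, reported here as a percentage (lower is better). It can be reproduced with the `evaluate` library; the Albanian strings below are made-up examples, not data from this evaluation.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical hypothesis/reference pair for illustration only.
predictions = ["përshëndetje si jeni"]
references = ["përshëndetje si jeni sot"]

# One deleted word out of four reference words -> 25.00
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```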