whisper-large-v2-anime
This model is a fine-tuned version of clu-ling/whisper-large-v2-japanese-5k-steps on the joujiboi/japanese-anime-speech dataset (https://huggingface.co/datasets/joujiboi/japanese-anime-speech).
Model description
Whisper large v2 was first fine-tuned on the Japanese Common Voice 11 dataset (the clu-ling checkpoint), then fine-tuned again on transcripts from joujiboi's anime-speech dataset. This is the first of three planned models; the next will be trained for 10k steps on the anime dataset, and another dataset may be added after that.
Intended uses & limitations
Intended for alternative forms of media where language and pitch often deviate from the mainstream (anime, TV, JAV, etc.).
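Since the framework versions below list PEFT, this fine-tune was presumably saved as LoRA adapters on top of the base checkpoint. The following is a minimal inference sketch under that assumption; the adapter repo ID and input file name are placeholders, and everything beyond the checkpoint names comes from standard transformers/peft usage rather than this card:

```python
import torch
import librosa
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base_id = "clu-ling/whisper-large-v2-japanese-5k-steps"  # base checkpoint named above
adapter_id = "whisper-large-v2-anime"  # placeholder: use this model's full Hub repo ID

processor = WhisperProcessor.from_pretrained(base_id)
model = WhisperForConditionalGeneration.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)  # stack the anime LoRA adapters
model.eval()

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)

# Whisper expects 16 kHz mono audio
audio, sr = librosa.load("clip.wav", sr=16000)  # hypothetical input file
inputs = processor(audio, sampling_rate=sr, return_tensors="pt").input_features.to(device)

# Force Japanese decoding; use task="translate" for Japanese-to-English output
forced_ids = processor.get_decoder_prompt_ids(language="japanese", task="transcribe")
with torch.no_grad():
    generated = model.generate(input_features=inputs, forced_decoder_ids=forced_ids)
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```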
Training and evaluation data
- joujiboi/japanese-anime-speech
- mozilla-foundation/common_voice_11_0
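A minimal sketch for loading both corpora with the datasets library; the split and config names are assumptions, and Common Voice 11 additionally requires accepting its terms and authenticating on the Hub:

```python
from datasets import load_dataset

# Split/config names are assumed, not taken from this card
anime = load_dataset("joujiboi/japanese-anime-speech", split="train")
cv11 = load_dataset("mozilla-foundation/common_voice_11_0", "ja", split="train")
```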
Training procedure
Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto training arguments follows the list):
- learning_rate: 1e-05
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 3000
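As a hedged sketch, the values above map onto transformers.Seq2SeqTrainingArguments as shown below. The per-device batch size of 2 is inferred from total_train_batch_size / gradient_accumulation_steps = 32 / 16 on a single device, and output_dir and fp16 are placeholders/assumptions rather than values from this card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-anime",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=2,   # inferred: 2 x 16 accumulation = 32 effective
    gradient_accumulation_steps=16,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=3000,
    fp16=True,                       # assumption; common for Whisper fine-tunes
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults
)
```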
Training results
Evaluation was qualitative: the model shows improved Japanese-to-English translation compared to whisper-large-v2 and clu-ling/whisper-large-v2-japanese-5k-steps.
Framework versions
- PEFT 0.8.2
- Transformers 4.30.2
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.13.3