---
base_model: openai/whisper-base
datasets:
  - Jpep26/AfterProcessing
language:
  - ko
library_name: transformers
license: apache-2.0
metrics:
  - wer
tags:
  - hf-asr-leaderboard
  - generated_from_trainer
model-index:
  - name: Test
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: AfterProcessing
          type: Jpep26/AfterProcessing
          args: 'config: ko, split: valid'
        metrics:
          - type: wer
            value: 0.7054380664652568
            name: Wer
---

# Test

This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Jpep26/AfterProcessing dataset. It achieves the following results on the evaluation set:

- Loss: 0.9167
- Cer: 0.8486
- Wer: 0.7054
- Mean: 0.7770
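
For orientation, below is a minimal inference sketch using the `transformers` pipeline. The repository id `Jpep26/Test` and the audio file path are assumptions for illustration, not values stated in this card:

```python
import torch
from transformers import pipeline

# Hypothetical repository id for this checkpoint; replace with the actual model id.
MODEL_ID = "Jpep26/Test"

# Build an ASR pipeline on GPU if available, otherwise CPU.
asr = pipeline(
    "automatic-speech-recognition",
    model=MODEL_ID,
    device=0 if torch.cuda.is_available() else -1,
)

# Transcribe a Korean audio file (placeholder path).
result = asr(
    "sample_ko.wav",
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(result["text"])
```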

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after the list):

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 200
- mixed_precision_training: Native AMP
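
As a reading aid, the list above maps onto `Seq2SeqTrainingArguments` roughly as sketched below. The `output_dir`, the evaluation cadence, and `predict_with_generate` are assumptions (the 50-step cadence is inferred from the training results table), not values stated in this card:

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the configuration above; commented values are assumptions.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-ko",   # hypothetical output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=200,
    fp16=True,                        # "Native AMP" mixed precision
    eval_strategy="steps",            # assumed: results below report eval every 50 steps
    eval_steps=50,                    # assumed
    predict_with_generate=True,       # assumed: needed to decode text for WER/CER
)
```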

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Cer    | Wer    | Mean   |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:------:|
| 2.9457        | 0.6410 | 50   | 2.6739          | 0.5791 | 0.8056 | 0.6923 |
| 1.7821        | 1.2821 | 100  | 1.6827          | 0.4622 | 0.6561 | 0.5592 |
| 1.2153        | 1.9231 | 150  | 1.1411          | 0.4216 | 0.6022 | 0.5119 |
| 0.8636        | 2.5641 | 200  | 0.9167          | 0.8486 | 0.7054 | 0.7770 |
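
The reported Mean is consistent with the simple average of Cer and Wer (for the final step, (0.8486 + 0.7054) / 2 ≈ 0.7770). Below is a minimal sketch of computing these metrics with the `evaluate` library; the reference and prediction strings are placeholders, not examples from the dataset:

```python
import evaluate

# Placeholder transcripts; in practice these come from decoding the valid split.
references = ["안녕하세요 반갑습니다"]
predictions = ["안녕하세요 반갑습니다"]

wer = evaluate.load("wer").compute(references=references, predictions=predictions)
cer = evaluate.load("cer").compute(references=references, predictions=predictions)
print(f"Wer: {wer:.4f}  Cer: {cer:.4f}  Mean: {(wer + cer) / 2:.4f}")
```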

### Framework versions

- Transformers 4.45.0.dev0
- PyTorch 2.1.2+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1