futureProofGlitch committed
Commit 62bf4c3
Parent: e078745

End of training

Files changed (1): README.md (+7, -11)
README.md CHANGED
@@ -24,7 +24,7 @@ model-index:
   metrics:
   - name: Wer
     type: wer
- value: 46.300985978395836
+ value: 16.45244089773603
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -34,9 +34,9 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [futureProofGlitch/whisper-small](https://huggingface.co/futureProofGlitch/whisper-small) on the Gigaspeech dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.3434
- - Wer Ortho: 56.8717
- - Wer: 46.3010
+ - Loss: 0.3078
+ - Wer Ortho: 28.4362
+ - Wer: 16.4524
 
 ## Model description
 
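The two WER flavours above follow the usual Whisper fine-tuning recipe: Wer Ortho is word error rate on the raw (orthographic) text, Wer is the same metric after text normalization. A minimal sketch of how they are typically computed, assuming the standard `evaluate` metric and transformers' `BasicTextNormalizer`; this card's actual evaluation script is not shown in the diff:

```python
# Sketch of the usual Whisper fine-tuning evaluation recipe (assumed here,
# not taken from this repo's training script).
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

references = ["The quick brown fox."]   # ground-truth transcripts
predictions = ["the quick brown fox"]   # model outputs

# "Wer Ortho": WER on raw orthographic text (casing and punctuation kept).
wer_ortho = 100 * wer_metric.compute(references=references, predictions=predictions)

# "Wer": WER after normalization, typically the headline number on the card.
wer = 100 * wer_metric.compute(
    references=[normalizer(r) for r in references],
    predictions=[normalizer(p) for p in predictions],
)
print(f"Wer Ortho: {wer_ortho:.4f}, Wer: {wer:.4f}")
```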
@@ -62,19 +62,15 @@ The following hyperparameters were used during training:
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: constant_with_warmup
 - lr_scheduler_warmup_steps: 50
- - training_steps: 3000
+ - training_steps: 1000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer     |
 |:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
- | 0.2268       | 0.5   | 500  | 0.3308          | 29.8287   | 18.2124 |
- | 0.2039       | 0.99  | 1000 | 0.3082          | 28.3139   | 16.3612 |
- | 0.1071       | 1.49  | 1500 | 0.3209          | 30.5425   | 18.9117 |
- | 0.1174       | 1.98  | 2000 | 0.3140          | 51.1370   | 40.1655 |
- | 0.0555       | 2.48  | 2500 | 0.3525          | 65.5197   | 53.9069 |
- | 0.0603       | 2.98  | 3000 | 0.3434          | 56.8717   | 46.3010 |
+ | 0.2267       | 0.5   | 500  | 0.3309          | 29.5720   | 18.0966 |
+ | 0.2035       | 0.99  | 1000 | 0.3078          | 28.4362   | 16.4524 |
 
 
 ### Framework versions
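The step-count change tracks the results table: validation WER bottoms out at step 1000 (16.45) and degrades sharply by step 3000 (46.30), which is presumably why the retrained run stops at 1000. A hedged sketch of how the hyperparameters in this hunk map onto transformers' `Seq2SeqTrainingArguments`; only the settings visible in the diff are filled in, `output_dir` is a placeholder, and the Adam betas/epsilon shown are the Trainer defaults that match the card:

```python
# Sketch: mapping this hunk's hyperparameters to Seq2SeqTrainingArguments.
# Only values visible in the diff are real; the rest are placeholders.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-gigaspeech",   # placeholder, not from the card
    lr_scheduler_type="constant_with_warmup",  # as listed above
    warmup_steps=50,
    max_steps=1000,   # the "+ training_steps: 1000" side of the diff
    fp16=True,        # "mixed_precision_training: Native AMP"
    adam_beta1=0.9,   # Trainer defaults, matching the card's
    adam_beta2=0.999, # "Adam with betas=(0.9,0.999) and epsilon=1e-08"
    adam_epsilon=1e-8,
)
```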
 
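For reference, a minimal sketch of transcribing audio with a checkpoint like this via the transformers `pipeline`; the model id below is the base checkpoint named in the card, since the diff page does not spell out this repo's own id, so substitute it when loading the fine-tuned weights:

```python
# Sketch: speech-to-text inference with a Whisper checkpoint.
# The model id is the *base* checkpoint named in the card; swap in this
# fine-tuned repo's own id (not shown on this diff page).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="futureProofGlitch/whisper-small",
)
result = asr("sample.wav")  # hypothetical local audio file
print(result["text"])
```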