
# Distil-wav2vec2

This model is a distilled version of the [wav2vec2 model](https://arxiv.org/pdf/2006.11477.pdf). It is 45% smaller and twice as fast as the original wav2vec2 base model.

## Evaluation results

This model achieves the following results (speed is measured with a batch size of 64):

| Model | Size | WER (LibriSpeech test-clean) | WER (LibriSpeech test-other) | Speed on CPU | Speed on GPU |
|---|---|---|---|---|---|
| Distil-wav2vec2 | 197.9 MB | 0.0983 | 0.2266 | 0.4006 s | 0.0046 s |
| wav2vec2-base | 360 MB | 0.0389 | 0.1047 | 0.4919 s | 0.0082 s |
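
The card does not include the evaluation script itself, so the sketch below is only an illustration of how WER numbers like these can be reproduced on LibriSpeech test-clean; it assumes the `datasets` and `jiwer` libraries and a simple lowercasing normalization, which may differ from the author's setup.

```python
# Hedged sketch: reproduce a WER measurement on LibriSpeech test-clean.
# Assumptions (not from the card): datasets + jiwer, lowercase normalization.
import torch
from datasets import load_dataset
from jiwer import wer
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("OthmaneJ/distil-wav2vec2")
model = Wav2Vec2ForCTC.from_pretrained("OthmaneJ/distil-wav2vec2")
model.eval()

dataset = load_dataset("librispeech_asr", "clean", split="test")

def transcribe(batch):
    # LibriSpeech audio is already 16 kHz, matching the model's expected rate
    inputs = processor(batch["audio"]["array"], sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    batch["prediction"] = processor.batch_decode(predicted_ids)[0].lower()
    return batch

dataset = dataset.map(transcribe)
print("WER:", wer([t.lower() for t in dataset["text"]], list(dataset["prediction"])))
```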

## Usage

A demo notebook (which runs seamlessly on Google Colab) is available at https://github.com/OthmaneJ/distil-wav2vec2. A minimal transcription example is sketched below.
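
This sketch assumes the model follows the standard `transformers` wav2vec2 CTC interface; the file name `sample.wav` is a placeholder for your own audio. See the notebook above for the full example.

```python
# Minimal usage sketch, assuming the standard Wav2Vec2ForCTC interface.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("OthmaneJ/distil-wav2vec2")
model = Wav2Vec2ForCTC.from_pretrained("OthmaneJ/distil-wav2vec2")
model.eval()

# Load an audio file (placeholder path) and resample to 16 kHz if needed
waveform, sample_rate = torchaudio.load("sample.wav")
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token at each frame
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```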

