
Example of how to use the models with whisper.cpp:

git clone https://github.com/ggerganov/whisper.cpp.git
cd whisper.cpp

git reset --hard 0b9af32a8b3fa7e2ae5f15a9a08f5b10394993f5
make
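
Next, download one of the ggml model files from this repository into the whisper.cpp directory. A minimal sketch using a direct download link; it assumes the .bin files are stored at the root of the Finnish-NLP/Finnish-finetuned-whisper-models-ggml-format repository under the names used in the commands below:

# download the medium model; swap in ggml-model-fi-tiny.bin, ggml-model-fi-large.bin or ggml-model-fi-large-v3.bin for another size
wget https://huggingface.co/Finnish-NLP/Finnish-finetuned-whisper-models-ggml-format/resolve/main/ggml-model-fi-medium.bin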
Then run one of the following, depending on which model you downloaded:
./main -m ggml-model-fi-tiny.bin -f INSERT_YOUR_FILENAME_HERE.wav -l fi
./main -m ggml-model-fi-medium.bin -f INSERT_YOUR_FILENAME_HERE.wav -l fi
./main -m ggml-model-fi-large.bin -f INSERT_YOUR_FILENAME_HERE.wav -l fi
./main -m ggml-model-fi-large-v3.bin -f INSERT_YOUR_FILENAME_HERE.wav -l fi
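
Note that ./main expects 16-bit, 16 kHz mono WAV input (the sample run below uses a file named oma_nauhoitus_16khz.wav). If your recording is in another format or sample rate, convert it first; a minimal sketch with ffmpeg, where the input filename is a placeholder:

# resample to 16 kHz, downmix to mono, encode as 16-bit PCM WAV
ffmpeg -i INSERT_YOUR_FILENAME_HERE.mp3 -ar 16000 -ac 1 -c:a pcm_s16le INSERT_YOUR_FILENAME_HERE_16khz.wav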
Sample output should look something like this:


(finetuneEnv) rasmus@DESKTOP-59O9VN1:/mnt/f/Omat_opiskelut/whisper_transformaatio/whisper.cpp$ ./main -m ggml-model-fi-medium.bin -f oma_nauhoitus_16khz.wav -l fi
whisper_init_from_file_with_params_no_state: loading model from 'ggml-model-fi-medium.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51865
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 1024
whisper_model_load: n_audio_head  = 16
whisper_model_load: n_audio_layer = 24
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 1024
whisper_model_load: n_text_head   = 16
whisper_model_load: n_text_layer  = 24
whisper_model_load: n_mels        = 80
whisper_model_load: ftype         = 1
whisper_model_load: qntvr         = 0
whisper_model_load: type          = 4 (medium)
whisper_model_load: adding 1608 extra tokens
whisper_model_load: n_langs       = 99
whisper_model_load:      CPU buffer size =  1533.52 MB
whisper_model_load: model size    = 1533.14 MB
whisper_init_state: kv self size  =  132.12 MB
whisper_init_state: kv cross size =  147.46 MB
whisper_init_state: compute buffer (conv)   =   25.61 MB
whisper_init_state: compute buffer (encode) =  170.28 MB
whisper_init_state: compute buffer (cross)  =    7.85 MB
whisper_init_state: compute buffer (decode) =   98.32 MB

system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 0 | COREML = 0 | OPENVINO = 0 | 

main: processing 'oma_nauhoitus_16khz.wav' (144160 samples, 9.0 sec), 4 threads, 1 processors, 5 beams + best of 5, lang = fi, task = transcribe, timestamps = 1 ...


[00:00:00.000 --> 00:00:09.000]  Moi, nimeni on Rasmus ja testaan tekoälymallia, joka tunnistaa puheeni ja kirjoittaa sen tekstiksi.
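
(The transcribed Finnish reads, roughly: "Hi, my name is Rasmus and I'm testing an AI model that recognizes my speech and writes it down as text.")

To keep the transcript instead of only printing it to the terminal, ./main also has output flags such as -otxt (plain text) and -osrt (SubRip subtitles); the exact set of flags can vary between whisper.cpp revisions, so check ./main --help on the pinned commit. A sketch:

./main -m ggml-model-fi-medium.bin -f oma_nauhoitus_16khz.wav -l fi -otxt -osrt -of transcript

With -of transcript, this should write the results to transcript.txt and transcript.srt.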