
These are experimental audio embedding models (the output embedding size is 1280 :0)

| Variant | Params | Hidden size | Middle (hidden) layers |
|---------|--------|-------------|------------------------|
| nano    | 254K   | 64          | 1                      |
| tiny    | 524K   | 128         | 1                      |
| small   | 1.8M   | 256         | 2                      |
| medium  | 4.7M   | 512         | 2                      |
| large   | 14.4M  | 512         | 4                      |
| xlarge  | 48.1M  | 768         | 5                      |
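
Below is a minimal PyTorch sketch of what one of these variants might look like, just to make the table concrete. The input feature dimension and the ReLU activation are assumptions not stated in this card; only the hidden sizes, middle-layer counts, and the 1280-dim output come from the table. The actual architecture lives in the repo linked further down.

```python
# Illustrative sketch only -- not the exact implementation from the repo.
import torch
import torch.nn as nn

class AudioEmbedder(nn.Module):
    """Feed-forward embedder: audio features -> hidden stack -> 1280-dim embedding."""

    def __init__(self, input_dim: int, hidden_size: int, n_middle: int, embed_dim: int = 1280):
        super().__init__()
        layers = [nn.Linear(input_dim, hidden_size), nn.ReLU()]
        for _ in range(n_middle):  # "middle (hidden)" layers from the table
            layers += [nn.Linear(hidden_size, hidden_size), nn.ReLU()]
        layers.append(nn.Linear(hidden_size, embed_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, input_dim) audio features -> (batch, 1280) embeddings
        return self.net(x)

# "tiny" config from the table: 128 hidden size, 1 middle layer.
# input_dim=512 is an arbitrary placeholder for the audio feature size.
model = AudioEmbedder(input_dim=512, hidden_size=128, n_middle=1)
emb = model(torch.randn(2, 512))
print(emb.shape)  # torch.Size([2, 1280])
```

Embeddings like these are usually compared with cosine similarity, e.g. `torch.nn.functional.cosine_similarity(emb[0], emb[1], dim=0)`.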

This embedding model was trained on the captioned-audio-1k dataset, plus music from a few other sources. It isn't meant to be the most accurate model *ever*, but it's a stepping stone!

Code: https://github.com/muzaik-ai/embedding/
