Intro

The music genre classification model is fine-tuned from a model pre-trained on the computer vision (CV) domain and classifies audio into different genres. During pre-training, the backbone learns rich feature representations from a large-scale image dataset; transfer learning then carries these features over to the music genre classification task to improve performance on audio data. In the fine-tuning phase, an audio dataset covering 16 music genre categories is used. Each audio sample is first converted into a spectrogram, turning the temporal signal into a two-dimensional time-frequency representation that captures how the frequency content evolves over time and gives the model rich information about the audio. Fine-tuning then adapts the pre-trained model to the genre classification task: the model learns to extract genre-relevant features from the spectrograms and to classify samples into genres such as rock, classical, and pop. By pairing a CV pre-trained model with an audio task, this approach exploits cross-modal knowledge transfer and demonstrates the adaptability and effectiveness of pre-trained models across domains.
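
As a concrete illustration of this pipeline, the minimal sketch below converts an audio file into a log-mel spectrogram and feeds it to a VGG19_BN backbone whose classifier head has been replaced for 16 genres. It is not the released training code: the use of librosa and torchvision, the spectrogram parameters, and the file name are illustrative assumptions.

# Minimal sketch: audio -> log-mel spectrogram -> fine-tuned CV backbone.
# Library choices (librosa, torchvision VGG19_BN) and constants are assumptions,
# not the released training code.
import librosa
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import vgg19_bn

NUM_GENRES = 16  # the fine-tuning dataset covers 16 genre categories

def audio_to_spectrogram(path: str, sr: int = 22050) -> torch.Tensor:
    """Convert an audio file into a 3-channel log-mel spectrogram tensor."""
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    log_mel = librosa.power_to_db(mel, ref=np.max)
    # Scale to [0, 1] and repeat to 3 channels to match the CV backbone's input.
    log_mel = (log_mel - log_mel.min()) / (log_mel.max() - log_mel.min() + 1e-8)
    x = torch.from_numpy(log_mel).float().unsqueeze(0).repeat(3, 1, 1)
    return x.unsqueeze(0)  # shape: (1, 3, n_mels, time)

# Start from ImageNet-pretrained weights and replace the classifier head
# so the output matches the 16 genre classes.
model = vgg19_bn(weights="IMAGENET1K_V1")
model.classifier[-1] = nn.Linear(model.classifier[-1].in_features, NUM_GENRES)
model.eval()

with torch.no_grad():
    logits = model(audio_to_spectrogram("example.wav"))  # "example.wav" is a placeholder
    pred = logits.argmax(dim=-1)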

Demo

https://huggingface.co/spaces/ccmusic-database/music-genre

Usage

# Download the model snapshot from the ModelScope mirror to a local cache directory
from modelscope import snapshot_download

model_dir = snapshot_download('ccmusic-database/music_genre')
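
snapshot_download returns the path of the local directory containing the downloaded model files. As a quick sanity check (the exact file names inside the snapshot are not assumed here):

# Inspect the downloaded snapshot; the contents vary by model release.
import os
print(model_dir)
print(os.listdir(model_dir))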

Maintenance

# Clone the repo without downloading large LFS-tracked files up front;
# they can be fetched later with: git lfs pull
GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:ccmusic-database/music_genre
cd music_genre

Results

A demo result of VGG19_BN fine-tuning is reported as three figures: the loss curve, the training and validation accuracy curves, and the confusion matrix.

Dataset

https://huggingface.co/datasets/ccmusic-database/music_genre
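
The fine-tuning data can also be pulled with the Hugging Face datasets library. The sketch below is a hedged example: the available subsets, splits, and column names should be confirmed on the dataset card.

# Minimal sketch of loading the fine-tuning dataset with `datasets`.
# Split and column names below are assumptions; depending on how the dataset
# is packaged, a configuration name or trust_remote_code=True may be required.
from datasets import load_dataset

ds = load_dataset("ccmusic-database/music_genre")
print(ds)                 # shows the available splits
example = ds["train"][0]  # assumption: a "train" split exists
print(example.keys())     # inspect the feature columns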

Mirror

https://www.modelscope.cn/models/ccmusic-database/music_genre

Evaluation

https://github.com/monetjoe/ccmusic_eval

Cite

@dataset{zhaorui_liu_2021_5676893,
  author       = {Monan Zhou and Shenyang Xu and Zhaorui Liu and Zhaowen Wang and Feng Yu and Wei Li and Baoqiang Han},
  title        = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research},
  month        = {mar},
  year         = {2024},
  publisher    = {HuggingFace},
  version      = {1.2},
  url          = {https://huggingface.co/ccmusic-database}
}