Evaluation of CNN-based Automatic Music Tagging Models

1 Jun 2020 · Minz Won, Andres Ferraro, Dmitry Bogdanov, Xavier Serra

Recent advances in deep learning have accelerated the development of content-based automatic music tagging systems. Music information retrieval (MIR) researchers have proposed various architecture designs, mainly based on convolutional neural networks (CNNs), that achieve state-of-the-art results in this multi-label binary classification task. However, due to differences in the experimental setups used by researchers, such as different dataset splits and software versions for evaluation, it is difficult to compare the proposed architectures directly with each other. To facilitate further research, in this paper we conduct a consistent evaluation of different music tagging models on three datasets (MagnaTagATune, Million Song Dataset, and MTG-Jamendo) and provide reference results using common evaluation metrics (ROC-AUC and PR-AUC). Furthermore, all the models are evaluated with perturbed inputs to investigate their generalization capabilities with respect to time stretch, pitch shift, dynamic range compression, and addition of white noise. For reproducibility, we provide PyTorch implementations with the pre-trained models.
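As a concrete illustration of the evaluation protocol, the sketch below shows how macro-averaged ROC-AUC and PR-AUC can be computed for multi-label tagging with scikit-learn (average precision is the usual estimator of PR-AUC). The array shapes and variable names are illustrative assumptions, not taken from the paper's code.

import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

# Hypothetical evaluation set: 1000 tracks, 50 binary tags.
y_true = np.random.randint(0, 2, size=(1000, 50))   # ground-truth tag matrix
y_score = np.random.rand(1000, 50)                  # model's predicted tag probabilities

# Macro averaging computes each tag's score independently, then averages.
roc_auc = roc_auc_score(y_true, y_score, average="macro")
pr_auc = average_precision_score(y_true, y_score, average="macro")
print(f"ROC-AUC: {roc_auc:.4f}  PR-AUC: {pr_auc:.4f}")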

Task                 Dataset                 Model                   Metric    Value   Global Rank
Music Auto-Tagging   MagnaTagATune (clean)   Short-chunk CNN + Res   ROC-AUC   91.29   #2
                                                                     PR-AUC    46.14   #1
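The robustness evaluation perturbs the audio inputs in four ways. The following sketch shows one plausible implementation of these perturbations using librosa and NumPy; all parameter values, and the simple static compressor used for dynamic range compression, are illustrative assumptions rather than the paper's exact settings.

import numpy as np
import librosa

def time_stretch(y, rate=1.25):
    # Speed up (rate > 1) or slow down (rate < 1) without changing pitch.
    return librosa.effects.time_stretch(y, rate=rate)

def pitch_shift(y, sr, n_steps=2):
    # Shift pitch by n_steps semitones without changing duration.
    return librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)

def add_white_noise(y, snr_db=20.0):
    # Add Gaussian noise scaled to a target signal-to-noise ratio.
    signal_power = np.mean(y ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=y.shape)
    return y + noise

def dynamic_range_compression(y, threshold=0.5, ratio=4.0):
    # Crude static compressor: attenuate sample magnitudes above the threshold.
    out = y.copy()
    mask = np.abs(out) > threshold
    out[mask] = np.sign(out[mask]) * (threshold + (np.abs(out[mask]) - threshold) / ratio)
    return out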
