In this study, we train deep neural networks to classify composers in the symbolic domain.
The purpose of speech dereverberation is to remove the quality-degrading effects of a time-invariant impulse response filter from the signal.
Since the vocal component plays a crucial role in popular music, singing voice detection has been an active research topic in music information retrieval.
Following their success in Computer Vision and other areas, deep learning techniques have recently become widely adopted in Music Information Retrieval (MIR) research.
In this paper, we empirically investigate the effect of audio preprocessing on music tagging with deep neural networks.
The results highlight several important aspects of music tagging and neural networks.
In this paper, we present a transfer learning approach for music classification and regression tasks.
Deep convolutional neural networks (CNNs) have been actively adopted in the field of music information retrieval, e.g., genre classification, mood detection, and chord recognition.
We introduce a novel playlist generation algorithm that focuses on the quality of transitions using a recurrent neural network (RNN).