Music Information Retrieval

9 papers with code · Music

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

CREPE: A Convolutional Representation for Pitch Estimation

17 Feb 2018 · marl/crepe

The task of estimating the fundamental frequency of a monophonic sound recording, also known as pitch tracking, is fundamental to audio processing with multiple applications in speech processing and music information retrieval. To date, the best performing techniques, such as the pYIN algorithm, are based on a combination of DSP pipelines and heuristics.

MUSIC INFORMATION RETRIEVAL
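For contrast with CREPE's learned, convolutional approach, the DSP-pipeline style of pitch tracker the abstract mentions can be sketched in a few lines of numpy. This is a naive autocorrelation estimator, not pYIN or CREPE; the function name, window length, and frequency bounds are illustrative choices:

```python
import numpy as np

def autocorr_pitch(frame, sr, fmin=50.0, fmax=1000.0):
    """Estimate f0 of one frame from the autocorrelation peak.

    A deliberately simple DSP baseline: real trackers (pYIN, CREPE)
    add normalization, thresholding, and temporal smoothing.
    """
    frame = frame - frame.mean()
    # Autocorrelation for non-negative lags only.
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Search lags corresponding to the allowed pitch range.
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

# Synthetic check: a 220 Hz sine sampled at 16 kHz.
sr = 16000
t = np.arange(1024) / sr
f0 = autocorr_pitch(np.sin(2 * np.pi * 220.0 * t), sr)
```

On clean monophonic tones this recovers the fundamental to within a lag-quantization error of a few Hz; the heuristics CREPE replaces exist precisely because real recordings are far messier than this.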

A Tutorial on Deep Learning for Music Information Retrieval

13 Sep 2017 · keunwoochoi/dl4mir

Following their success in Computer Vision and other areas, deep learning techniques have recently become widely adopted in Music Information Retrieval (MIR) research. However, the majority of works aim to adopt and assess methods that have been shown to be effective in other domains, while there is still a great need for more original research focusing on music primarily and utilising musical knowledge and insight.

MUSIC INFORMATION RETRIEVAL

Optical Music Recognition with Convolutional Sequence-to-Sequence Models

16 Jul 2017 · apacha/OMR-Datasets

Optical Music Recognition (OMR) is an important technology within Music Information Retrieval. This data set is the first publicly available set in OMR research with sufficient size to train and evaluate deep learning models.

MUSIC INFORMATION RETRIEVAL

Music Artist Classification with Convolutional Recurrent Neural Networks

14 Jan 2019 · ZainNasrullah/music-artist-classification-crnn

Previous attempts at music artist classification use frame-level audio features which summarize frequency content within short intervals of time. In this work, an established classification architecture, a Convolutional Recurrent Neural Network (CRNN), is applied to the artist20 music artist identification dataset under a comprehensive set of conditions.

MUSIC INFORMATION RETRIEVAL
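The "frame-level audio features" the abstract refers to can be sketched with a plain numpy short-time Fourier transform, where each frame summarizes frequency content over a short interval. The window and hop sizes below are illustrative defaults, not the paper's settings:

```python
import numpy as np

def frame_spectrogram(signal, frame_len=512, hop=256):
    """Frame-level magnitude spectra over short, overlapping windows.

    Returns an array of shape (n_frames, frame_len // 2 + 1); each row
    is one frame's frequency content. Illustrative parameters only.
    """
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# A 440 Hz sine at 16 kHz: every frame should peak near bin 440 / 31.25 ≈ 14.
sig = np.sin(2 * np.pi * 440.0 * np.arange(4096) / 16000)
spec = frame_spectrogram(sig)
```

A CRNN consumes the whole spectrogram at once (convolutions over time-frequency patches, recurrence over frames) rather than classifying each frame's summary independently, which is the contrast the abstract draws.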

Singing Voice Separation Using a Deep Convolutional Neural Network Trained by Ideal Binary Mask and Cross Entropy

4 Dec 2018 · EdwardLin2014/CNN-with-IBM-for-Singing-Voice-Separation

We present a unique neural network approach inspired by a technique that has revolutionized the field of vision: pixel-wise image classification, which we combine with cross entropy loss and pretraining of the CNN as an autoencoder on singing voice spectrograms. The ideal binary mask (IBM) identifies the dominant sound source in each time-frequency (T-F) bin of the magnitude spectrogram of a mixture signal, by considering each T-F bin as a pixel with a multi-label (one per sound source).

IMAGE CLASSIFICATION MUSIC INFORMATION RETRIEVAL
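The ideal-binary-mask target described above can be computed directly in numpy. This is only the mask construction and its application, not the paper's CNN that learns to predict the mask; the array shapes and the additive-mixture simplification are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy magnitude spectrograms (freq_bins x time_frames) for the two sources.
vocals = rng.random((64, 10))
accomp = rng.random((64, 10))
mixture = vocals + accomp  # magnitudes only add approximately in practice

# Ideal binary mask: 1 where the vocal dominates the T-F bin, else 0.
# This is the per-bin "pixel label" the network is trained to predict.
ibm = (vocals > accomp).astype(float)

# Applying the IBM to the mixture yields the vocal estimate.
vocal_est = ibm * mixture
```

At inference time the CNN's predicted mask stands in for `ibm`, since the clean sources are of course unavailable.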

A Deep Bag-of-Features Model for Music Auto-Tagging

20 Aug 2015 · juhannam/deepbof

Feature learning and deep learning have drawn great attention in recent years as a way of transforming input data into more effective representations using learning algorithms. Such interest has grown in the area of music information retrieval (MIR) as well, particularly in music audio classification tasks such as auto-tagging.

AUDIO CLASSIFICATION MUSIC AUTO-TAGGING MUSIC INFORMATION RETRIEVAL

One Deep Music Representation to Rule Them All? A comparative analysis of different representation learning strategies

12 Feb 2018 · eldrin/MTLMusicRepresentation-PyTorch

The underlying hypothesis is that if the initial and new learning tasks show commonalities and are applied to the same type of input data (e.g. music audio), the generated deep representation of the data is also informative for the new task. In this paper, we present the results of our investigation into the most important factors for generating deep representations for the data and learning tasks in the music domain.

MUSIC INFORMATION RETRIEVAL REPRESENTATION LEARNING TRANSFER LEARNING

Deep convolutional neural networks for predominant instrument recognition in polyphonic music

31 May 2016 · iooops/CS221-Audio-Tagging

In this paper, we present a convolutional neural network framework for predominant instrument recognition in real-world polyphonic music. We train our network from fixed-length music excerpts with a single-labeled predominant instrument and estimate an arbitrary number of predominant instruments from an audio signal with a variable length.

MUSIC INFORMATION RETRIEVAL
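The train-on-fixed-excerpts, predict-on-variable-length-audio scheme described above can be sketched as follows. The aggregation rule (mean over windows) and the threshold are assumptions for illustration, and the toy probability array stands in for a trained CNN's per-window sigmoid outputs:

```python
import numpy as np

def predict_instruments(window_probs, threshold=0.5):
    """Aggregate per-window multi-label probabilities over a whole clip.

    window_probs: (n_windows, n_instruments) sigmoid outputs from a
    classifier trained on fixed-length excerpts. Averaging across
    however many windows the clip yields, then thresholding, produces
    an arbitrary number of predominant instruments per clip.
    """
    clip_probs = window_probs.mean(axis=0)
    return np.flatnonzero(clip_probs >= threshold)

# Three windows, four instrument classes: class 1 is consistently active.
probs = np.array([[0.1, 0.9, 0.4, 0.2],
                  [0.2, 0.8, 0.6, 0.1],
                  [0.1, 0.7, 0.3, 0.3]])
active = predict_instruments(probs)
```

Because the number of windows adapts to the clip length while the classifier's input size stays fixed, the same model handles signals of any duration.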