Music Information Retrieval
94 papers with code • 0 benchmarks • 23 datasets
Most implemented papers
audioLIME: Listenable Explanations Using Source Separation
Deep neural networks (DNNs) are successfully applied in a wide variety of music information retrieval (MIR) tasks but their predictions are usually not interpretable.
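The core idea behind listenable explanations of this kind can be sketched as follows: separate the input into source components, re-mix random subsets of them, query the black-box model on each perturbed mixture, and fit a linear surrogate whose coefficients rank the sources by importance. This is a minimal NumPy sketch of that LIME-style procedure, not the authors' implementation; the toy `sources` and `predict` function below are illustrative assumptions.

```python
import numpy as np

def lime_style_importance(sources, predict, n_samples=500, seed=0):
    """Estimate per-source importances for a black-box prediction.

    sources : list of 1-D arrays (separated stems whose sum is the mixture)
    predict : callable mapping an audio array to a scalar model score
    """
    rng = np.random.default_rng(seed)
    k = len(sources)
    # Random binary masks: which sources are kept in each perturbed mixture.
    masks = rng.integers(0, 2, size=(n_samples, k))
    scores = np.array([
        predict(np.sum([m * s for m, s in zip(mask, sources)], axis=0))
        for mask in masks
    ])
    # Least-squares linear surrogate; its coefficients are the importances.
    X = np.column_stack([masks, np.ones(n_samples)])
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return coef[:k]

# Toy setup (hypothetical): three constant "stems" and a model that
# just averages the mixture, so the true importances are 1, 0.5, 0.25.
sources = [np.ones(8), np.full(8, 0.5), np.full(8, 0.25)]
predict = lambda audio: float(np.mean(audio))
```

Because the toy model is exactly linear in the mask, the surrogate recovers the per-source contributions exactly; with a real classifier the coefficients are only a local approximation around the original mixture.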
Tracing Back Music Emotion Predictions to Sound Sources and Intuitive Perceptual Qualities
In previous work, we have shown how to derive explanations of model predictions in terms of spectrogram image segments that connect to the high-level emotion prediction via a layer of easily interpretable perceptual features.
Sequence-to-Sequence Piano Transcription with Transformers
Automatic Music Transcription has seen significant progress in recent years, driven by training custom deep neural networks on large datasets.
Learning Sparse Analytic Filters for Piano Transcription
In this work, several variations of a frontend filterbank learning module are investigated for piano transcription, a challenging low-level music information retrieval task.
Nonnegative Tucker Decomposition with Beta-divergence for Music Structure Analysis of Audio Signals
Nonnegative Tucker decomposition (NTD), a tensor decomposition model, has received increased interest in recent years because of its ability to blindly extract meaningful patterns, in particular in Music Information Retrieval.
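The beta-divergence used as the fitting cost here is a single family that interpolates between three losses common in audio modelling: half squared Euclidean distance (beta = 2), generalized Kullback-Leibler divergence (beta = 1), and Itakura-Saito divergence (beta = 0). A minimal implementation of the divergence itself (not of the NTD algorithm):

```python
import numpy as np

def beta_divergence(x, y, beta):
    """Summed element-wise beta-divergence d_beta(x | y).

    beta = 2 -> half squared Euclidean distance
    beta = 1 -> generalized Kullback-Leibler divergence
    beta = 0 -> Itakura-Saito divergence
    Assumes strictly positive entries for beta <= 1.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    if beta == 1:                       # generalized KL
        return float(np.sum(x * np.log(x / y) - x + y))
    if beta == 0:                       # Itakura-Saito
        return float(np.sum(x / y - np.log(x / y) - 1))
    # General case, valid for beta not in {0, 1}.
    return float(np.sum((x**beta + (beta - 1) * y**beta
                         - beta * x * y**(beta - 1)) / (beta * (beta - 1))))
```

For beta = 2 the formula reduces term by term to (x - y)**2 / 2; the Itakura-Saito case (beta = 0) is scale-invariant, which is why it is often preferred for audio spectrograms.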
A Data-Driven Methodology for Considering Feasibility and Pairwise Likelihood in Deep Learning Based Guitar Tablature Transcription Systems
This naturally enforces playability constraints for guitar, and yields tablature which is more consistent with the symbolic data used to estimate pairwise likelihoods.
CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval
We introduce CLaMP: Contrastive Language-Music Pre-training, which learns cross-modal representations between natural language and symbolic music using a music encoder and a text encoder trained jointly with a contrastive loss.
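The joint contrastive training described here follows the usual symmetric InfoNCE recipe: embed a batch of paired texts and pieces of music, treat matching rows as positives and all other rows as in-batch negatives, and apply cross-entropy in both directions over a temperature-scaled similarity matrix. A NumPy sketch of that loss (the temperature value and function names are illustrative assumptions, not CLaMP's code):

```python
import numpy as np

def clip_style_loss(text_emb, music_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Row i of text_emb and row i of music_emb are a matching pair;
    every other row in the batch serves as a negative.
    """
    # L2-normalize so the dot product is a cosine similarity.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    m = music_emb / np.linalg.norm(music_emb, axis=1, keepdims=True)
    logits = t @ m.T / temperature

    def xent(l):
        # Cross-entropy with the diagonal (matching pairs) as targets.
        l = l - l.max(axis=1, keepdims=True)        # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Average the text-to-music and music-to-text directions.
    return (xent(logits) + xent(logits.T)) / 2
```

In training, both encoders are updated to minimize this loss, which pulls matched text/music embeddings together and pushes mismatched batch items apart in the shared space.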
A Deep Bag-of-Features Model for Music Auto-Tagging
Feature learning and deep learning have drawn great attention in recent years as a way of transforming input data into more effective representations using learning algorithms.
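The bag-of-features idea underlying this line of work pools a variable number of frame-level descriptors into one fixed-length clip representation: assign each frame to a codeword and accumulate a normalized histogram. This is a hard-assignment NumPy sketch of that pooling step only; the deep model in the paper learns a soft, differentiable assignment instead.

```python
import numpy as np

def bag_of_features(frames, codebook):
    """Pool frame-level features into a clip-level histogram.

    frames   : (n_frames, d) array of local feature vectors
    codebook : (k, d) array of reference vectors (e.g. k-means centroids)
    Returns a length-k normalized histogram usable for auto-tagging.
    """
    # Nearest-codeword (hard) assignment for every frame.
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=2)
    assignments = dists.argmin(axis=1)
    # Histogram over codewords, normalized to sum to 1.
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()
```

The resulting fixed-length vector can be fed to any standard classifier, which is what makes the scheme attractive for clips of arbitrary duration.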
Automatic Instrument Recognition in Polyphonic Music Using Convolutional Neural Networks
Traditional methods to tackle many music information retrieval tasks typically follow a two-step architecture: feature engineering followed by a simple learning algorithm.
Deep convolutional neural networks for predominant instrument recognition in polyphonic music
We train our network from fixed-length music excerpts with a single-labeled predominant instrument and estimate an arbitrary number of predominant instruments from an audio signal with a variable length.
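Handling variable-length audio with a fixed-input network is typically done by sliding a window over the signal, averaging the per-excerpt instrument probabilities, and thresholding the aggregate. A hedged sketch of that aggregation step (the function name, window parameters, and dummy model below are assumptions for illustration, not the paper's code):

```python
import numpy as np

def predict_instruments(audio, model, excerpt_len, hop, threshold=0.5):
    """Aggregate fixed-length excerpt predictions over a whole signal.

    model : callable mapping an excerpt to per-instrument probabilities.
    Sliding-window averaging plus a threshold yields clip-level labels,
    so an arbitrary number of predominant instruments can be returned.
    """
    starts = range(0, max(len(audio) - excerpt_len, 0) + 1, hop)
    probs = np.mean([model(audio[s:s + excerpt_len]) for s in starts], axis=0)
    # Indices of instruments whose averaged probability clears the threshold.
    return probs, np.flatnonzero(probs >= threshold)
```

Averaging before thresholding makes the clip-level decision robust to excerpts where an instrument is momentarily silent.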