Music Information Retrieval
93 papers with code • 0 benchmarks • 23 datasets
Latest papers with no code
Scaling Up Music Information Retrieval Training with Semi-Supervised Learning
To our knowledge, this is the first attempt to study the effects of scaling up both model size and training data across a variety of MIR tasks.
WikiMT++ Dataset Card
WikiMT++ is an expanded and refined version of WikiMusicText (WikiMT), featuring 1010 curated lead sheets in ABC notation.
Performance Conditioning for Diffusion-Based Multi-Instrument Music Synthesis
Building on state-of-the-art diffusion-based music generative models, we introduce performance conditioning - a simple tool instructing the generative model to synthesize music with the style and timbre of specific instruments taken from specific performances.
Towards Robust and Truly Large-Scale Audio-Sheet Music Retrieval
A range of applications of multi-modal music information retrieval is centred around the problem of connecting large collections of sheet music (images) to corresponding audio recordings, that is, identifying pairs of audio and score excerpts that refer to the same musical content.
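The cross-modal retrieval setting described above can be illustrated with a minimal sketch: assuming separately trained encoders have already mapped audio excerpts and sheet-music snippets into a shared embedding space (the embeddings below are random placeholders, not the paper's actual models), retrieval reduces to nearest-neighbour search by cosine similarity.

```python
import numpy as np

# Hypothetical pre-computed embeddings from two modality-specific encoders
# projecting into a shared 128-dimensional space (random placeholders).
rng = np.random.default_rng(0)
audio_embeddings = rng.normal(size=(1000, 128))   # 1000 audio excerpts
score_embeddings = rng.normal(size=(1000, 128))   # 1000 score snippets

def retrieve(query, database, k=5):
    """Return indices of the k database entries most cosine-similar to query."""
    q = query / np.linalg.norm(query)
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    sims = db @ q                      # cosine similarity to every entry
    return np.argsort(-sims)[:k]      # indices of the k best matches

# Find the score snippets closest to the first audio excerpt.
top = retrieve(audio_embeddings[0], score_embeddings)
```

Scaling this to "truly large-scale" collections is exactly where exact search becomes a bottleneck, which is why approximate nearest-neighbour indexes are typically used in practice.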
DisCover: Disentangled Music Representation Learning for Cover Song Identification
We analyze the CSI task from a disentanglement perspective using causal graphs, and identify the intra-version and inter-version effects that bias invariant learning.
JAZZVAR: A Dataset of Variations found within Solo Piano Performances of Jazz Standards for Music Overpainting
In this paper, we outline the curation process for obtaining and sorting the repertoire, the pipeline for creating the Original and Variation pairs, and our analysis of the dataset.
Real-time Percussive Technique Recognition and Embedding Learning for the Acoustic Guitar
We introduce a taxonomy of guitar body percussion based on hand part and location.
On the Effectiveness of Speech Self-supervised Learning for Music
Our findings suggest that training with music data can generally improve performance on MIR tasks, even when models are trained using paradigms designed for speech.
JEPOO: Highly Accurate Joint Estimation of Pitch, Onset and Offset for Music Information Retrieval
In this paper, we propose a highly accurate method for joint estimation of pitch, onset and offset, named JEPOO.
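To make the task concrete, the sketch below decodes (pitch, onset, offset) note events from a frame-level pitch-activation matrix by simple thresholding. This is a generic baseline for illustration only, not JEPOO's actual estimation method; the activation matrix, threshold, and hop size are assumptions.

```python
import numpy as np

def notes_from_activation(act, threshold=0.5, hop_seconds=0.01):
    """Decode (pitch, onset_s, offset_s) events from an activation matrix
    of shape (num_frames, num_pitches) via thresholding (toy baseline)."""
    active = act >= threshold
    notes = []
    for pitch in range(active.shape[1]):
        col = active[:, pitch]
        onset = None
        for t, on in enumerate(col):
            if on and onset is None:
                onset = t                     # note starts here
            elif not on and onset is not None:
                notes.append((pitch, onset * hop_seconds, t * hop_seconds))
                onset = None                  # note ended at frame t
        if onset is not None:                 # note still active at the end
            notes.append((pitch, onset * hop_seconds, len(col) * hop_seconds))
    return notes

# Toy example: pitch index 1 active from frame 2 up to frame 6.
act = np.zeros((8, 3))
act[2:6, 1] = 0.9
events = notes_from_activation(act)   # one event for pitch 1
```

Joint methods such as JEPOO instead estimate all three quantities together, avoiding the error propagation that this kind of independent per-frame post-processing suffers from.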
Transfer of knowledge among instruments in automatic music transcription
The results show that synthesized training data can serve as a good basis for pretraining general-purpose transcription models that are not focused on a single instrument.