Music Information Retrieval
94 papers with code • 0 benchmarks • 23 datasets
Latest papers
All-In-One Metrical And Functional Structure Analysis With Neighborhood Attentions on Demixed Audio
Music is characterized by complex hierarchical structures.
Transfer Learning and Bias Correction with Pre-trained Audio Embeddings
This approach allows representations derived for one task to be applied to another, and can result in high accuracy with less stringent training data requirements for the downstream task.
Audio Embeddings as Teachers for Music Classification
Music classification has been one of the most popular tasks in the field of music information retrieval.
MARBLE: Music Audio Representation Benchmark for Universal Evaluation
This is evident in the limited work on deep music representations, the scarcity of large-scale datasets, and the absence of a universal and community-driven benchmark.
SANGEET: A XML based Open Dataset for Research in Hindustani Sangeet
The dataset is intended to provide ground-truth information for music information research tasks, thereby supporting data-driven analyses from a machine learning perspective.
LooPy: A Research-Friendly Mix Framework for Music Information Retrieval on Electronic Dance Music
Music information retrieval (MIR) has undergone explosive development with the advancement of deep learning in recent years.
CLaMP: Contrastive Language-Music Pre-training for Cross-Modal Symbolic Music Information Retrieval
We introduce CLaMP: Contrastive Language-Music Pre-training, which learns cross-modal representations between natural language and symbolic music using a music encoder and a text encoder trained jointly with a contrastive loss.
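The joint training described above can be illustrated with a minimal symmetric contrastive (InfoNCE-style) loss over a batch of paired music and text embeddings. This is a generic sketch of contrastive pre-training, not the actual CLaMP implementation; the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def contrastive_loss(music_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE-style loss: row i of each array is a matched pair.
    Illustrative sketch only, not the CLaMP codebase."""
    # L2-normalize so dot products become cosine similarities
    m = music_emb / np.linalg.norm(music_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.nor_
```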
AIRCADE: an Anechoic and IR Convolution-based Auralization Data-compilation Ensemble
In this paper, we introduce a data-compilation ensemble, primarily intended to serve as a resource for researchers in the field of dereverberation, particularly for data-driven approaches.
Tempo vs. Pitch: understanding self-supervised tempo estimation
Self-supervision methods learn representations by solving pretext tasks that do not require human-generated labels, alleviating the need for time-consuming annotations.
Symbolic Music Structure Analysis with Graph Representations and Changepoint Detection Methods
In the past, there have been several works that attempt to segment music into the audio and symbolic domains, however, the identification and segmentation of the music structure at different levels is still an open research problem in this area.