Music Transcription
44 papers with code • 6 benchmarks • 12 datasets
Music transcription is the task of converting an acoustic musical signal into some form of music notation.
(Image credit: ISMIR 2015 Tutorial - Automatic Music Transcription)
Most implemented papers
Deep Complex Networks
Despite their attractive properties and potential for opening up entirely new neural architectures, complex-valued deep neural networks have been marginalized due to the absence of the building blocks required to design such models.
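The missing building blocks include complex convolutions. As a rough sketch (assumed, not the authors' released code), a complex convolution can be emulated with two real-valued convolutions applied to the real and imaginary parts of the input:

```python
import torch
import torch.nn as nn

class ComplexConv1d(nn.Module):
    """Toy complex convolution built from two real-valued convolutions.

    For complex weights W = A + iB and complex input x = a + ib,
    W * x = (A*a - B*b) + i(A*b + B*a).
    """
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.conv_real = nn.Conv1d(in_channels, out_channels, kernel_size)
        self.conv_imag = nn.Conv1d(in_channels, out_channels, kernel_size)

    def forward(self, x_real, x_imag):
        y_real = self.conv_real(x_real) - self.conv_imag(x_imag)
        y_imag = self.conv_real(x_imag) + self.conv_imag(x_real)
        return y_real, y_imag

# Example: a batch of 8 signals with 2 channels and 128 time steps.
x_re, x_im = torch.randn(8, 2, 128), torch.randn(8, 2, 128)
layer = ComplexConv1d(in_channels=2, out_channels=4, kernel_size=3)
y_re, y_im = layer(x_re, x_im)
print(y_re.shape, y_im.shape)  # torch.Size([8, 4, 126]) for both parts
```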
Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset
Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales.
High-resolution Piano Transcription with Pedals by Regressing Onset and Offset Times
Previous AMT systems are also sensitive to misaligned onset and offset labels in audio recordings.
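One way to reduce that sensitivity, sketched below with assumed parameters (a simplification, not the paper's exact target definition), is to replace hard 0/1 onset frames with continuous targets that decay with a frame's distance from the annotated onset time:

```python
import numpy as np

def soft_onset_targets(onset_times, num_frames, hop_seconds, width_seconds=0.05):
    """Continuous onset targets: 1.0 at the annotated onset, decaying linearly
    to 0.0 over +/- width_seconds. Frames near an onset still receive a useful
    regression target even if the label is slightly misaligned."""
    frame_times = np.arange(num_frames) * hop_seconds
    targets = np.zeros(num_frames)
    for t in onset_times:
        dist = np.abs(frame_times - t)
        targets = np.maximum(targets, np.clip(1.0 - dist / width_seconds, 0.0, 1.0))
    return targets

# Example: two onsets in a 1-second clip with 10 ms hops; values peak at 1.0
# on the labeled frames and fall off linearly on their neighbours.
print(soft_onset_targets([0.20, 0.55], num_frames=100, hop_seconds=0.01).round(2)[18:25])
```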
MT3: Multi-Task Multitrack Music Transcription
Automatic Music Transcription (AMT), inferring musical notes from raw audio, is a challenging task at the core of music understanding.
Music transcription modelling and composition using deep learning
We apply deep learning methods, specifically long short-term memory (LSTM) networks, to music transcription modelling and composition.
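As a minimal, assumed sketch of this kind of model (the corpus, layer sizes, and training step below are placeholders, not the paper's setup), a character-level LSTM can be trained to predict the next character of textual transcriptions:

```python
import torch
import torch.nn as nn

# Toy corpus of ABC-style transcription text (illustrative only).
corpus = "X:1\nT:Example\nK:Dmaj\n|:d2f afa|bgb afd|"
vocab = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(vocab)}

class CharLSTM(nn.Module):
    """Character-level LSTM language model over notation text."""
    def __init__(self, vocab_size, embed=32, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.head(h)  # logits for the next character at every position

model = CharLSTM(len(vocab))
ids = torch.tensor([[stoi[c] for c in corpus]])
logits = model(ids[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, len(vocab)), ids[:, 1:].reshape(-1))
loss.backward()  # one gradient step; new tunes are then sampled autoregressively
```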
Learning Features of Music from Scratch
This paper introduces a new large-scale music dataset, MusicNet, to serve as a source of supervision and evaluation of machine learning methods for music research.
Residual Shuffle-Exchange Networks for Fast Processing of Long Sequences
Attention is a commonly used mechanism in sequence processing, but its O(n^2) complexity prevents its application to long sequences.
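A quick back-of-the-envelope comparison (illustrative numbers only, not from the paper) shows why quadratic attention becomes impractical at audio-scale sequence lengths while an O(n log n) scheme stays tractable:

```python
import math

# Entries in a full self-attention score matrix (n^2) versus the work done by
# a log-depth exchange-style network (roughly n * log2(n)).
for n in (1_000, 100_000, 1_000_000):          # sequence length in frames/samples
    quadratic = n * n
    n_log_n = n * math.ceil(math.log2(n))
    print(f"n={n:>9,}  n^2={quadratic:>16,}  n*log2(n)={n_log_n:>13,}")
```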
The Effect of Spectrogram Reconstruction on Automatic Music Transcription: An Alternative Approach to Improve Transcription Accuracy
We attempt to use only the pitch labels (together with spectrogram reconstruction loss) and explore how far this model can go without introducing supervised sub-tasks.
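A hedged sketch of such an objective (tensor shapes and the recon_weight parameter are assumptions, not the paper's values) combines a frame-level pitch loss with a spectrogram reconstruction term and nothing else:

```python
import torch
import torch.nn.functional as F

def combined_loss(pitch_logits, pitch_labels, recon_spec, input_spec, recon_weight=1.0):
    """Hypothetical objective: frame-level pitch prediction plus spectrogram
    reconstruction, with no extra supervised sub-tasks (no separate
    onset/offset/velocity heads)."""
    transcription = F.binary_cross_entropy_with_logits(pitch_logits, pitch_labels)
    reconstruction = F.mse_loss(recon_spec, input_spec)
    return transcription + recon_weight * reconstruction

# Example shapes: 4 clips, 100 frames, 88 piano pitches, 229 mel bins.
loss = combined_loss(torch.randn(4, 100, 88),
                     torch.randint(0, 2, (4, 100, 88)).float(),
                     torch.randn(4, 100, 229),
                     torch.randn(4, 100, 229))
print(loss.item())
```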
Sequence-to-Sequence Piano Transcription with Transformers
Automatic Music Transcription has seen significant progress in recent years by training custom deep neural networks on large datasets.
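A typical ingredient of the sequence-to-sequence framing is serializing notes into a flat event vocabulary that a generic encoder-decoder can emit from spectrogram input; the tokenization below is an illustrative simplification, not the paper's exact vocabulary:

```python
def notes_to_tokens(notes, time_step=0.01):
    """notes: list of (onset_sec, offset_sec, midi_pitch).
    Returns MIDI-like token strings: time-shift tokens plus note-on/off events."""
    events = []
    for onset, offset, pitch in notes:
        events.append((onset, f"NOTE_ON_{pitch}"))
        events.append((offset, f"NOTE_OFF_{pitch}"))
    tokens, prev_tick = [], 0
    for t, ev in sorted(events):
        tick = round(t / time_step)
        if tick != prev_tick:
            tokens.append(f"TIME_{tick}")   # time token for this event group
            prev_tick = tick
        tokens.append(ev)
    return tokens

print(notes_to_tokens([(0.00, 0.50, 60), (0.25, 0.75, 64)]))
# ['NOTE_ON_60', 'TIME_25', 'NOTE_ON_64', 'TIME_50', 'NOTE_OFF_60', 'TIME_75', 'NOTE_OFF_64']
```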
Scoring Time Intervals using Non-Hierarchical Transformer For Automatic Piano Transcription
The neural semi-Markov Conditional Random Field (semi-CRF) framework has demonstrated promise for event-based piano transcription.
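A minimal sketch of the interval-scoring idea (random stand-in features and a single pitch, not the paper's architecture): every candidate (onset, offset) frame pair receives a score, and a semi-Markov-style dynamic program selects the best non-overlapping set of note intervals:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 12, 8                                  # frames, embedding size
onset_emb = rng.normal(size=(T, D))           # stand-ins for learned frame features
offset_emb = rng.normal(size=(T, D))
scores = onset_emb @ offset_emb.T             # scores[i, j]: note spanning frames i..j

# best[j] is the best total score using only frames before j.
best = np.zeros(T + 1)
back = [None] * (T + 1)
for j in range(1, T + 1):
    best[j], back[j] = best[j - 1], None      # option 1: frame j-1 left as silence
    for i in range(j):                        # option 2: a note covers frames i..j-1
        cand = best[i] + scores[i, j - 1]
        if cand > best[j]:
            best[j], back[j] = cand, i

notes, j = [], T                              # trace back the chosen intervals
while j > 0:
    if back[j] is None:
        j -= 1
    else:
        notes.append((back[j], j - 1))
        j = back[j]
print(sorted(notes), round(float(best[T]), 3))
```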