Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory.
Ranked #2 on Music Modeling on Nottingham
Tasks: Language Modelling, Machine Translation, Music Modeling, Sequential Image Classification
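Since the claim above rests on convolutions having long effective memory, here is a minimal sketch of the causal dilated convolution at the core of a temporal convolutional network. All channel sizes and the kernel width are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CausalConv1d(nn.Module):
    """1-D convolution that never looks at future timesteps."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation   # left-pad so no future leaks in
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        x = nn.functional.pad(x, (self.pad, 0))    # pad only the past side
        return self.conv(x)

# Stacking blocks with exponentially growing dilation doubles the receptive
# field at each layer, which is where the long effective memory comes from.
net = nn.Sequential(*[CausalConv1d(32, 32, dilation=2 ** i) for i in range(4)])
y = net(torch.randn(1, 32, 100))                   # output keeps the input length
```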
In contrast with this general approach, this paper shows that Transformers can do even better for music modeling when we improve how a musical score is converted into the data fed to the Transformer model.
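To make the idea of "score-to-data conversion" concrete, here is a hypothetical event-based encoding of a short melody. The token vocabulary (NOTE_ON, DURATION, TIME_SHIFT) is invented for illustration and is not the encoding the paper actually proposes.

```python
def notes_to_events(notes):
    """notes: list of (start_beat, pitch, duration_in_beats), sorted by start."""
    events, clock = [], 0.0
    for start, pitch, dur in notes:
        if start > clock:                          # advance time explicitly
            events.append(f"TIME_SHIFT_{start - clock:g}")
            clock = start
        events.append(f"NOTE_ON_{pitch}")
        events.append(f"DURATION_{dur:g}")
    return events

print(notes_to_events([(0.0, 60, 1.0), (1.0, 64, 0.5), (1.5, 67, 0.5)]))
# ['NOTE_ON_60', 'DURATION_1', 'TIME_SHIFT_1', 'NOTE_ON_64', 'DURATION_0.5', ...]
```

The point of such re-encodings is that the discrete event stream exposes musically meaningful structure (timing, duration) directly in the token sequence the Transformer sees.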
Recurrent Neural Networks have long been the dominant choice for sequence modeling.
Ranked #1 on Music Modeling on Nottingham
Tasks: Language Modelling, Music Modeling, Sequential Image Classification
In this paper we compare different types of recurrent units in recurrent neural networks (RNNs).
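As a concrete starting point for such a comparison, the sketch below instantiates three standard recurrent units at the same size and contrasts their parameter counts. The sizes are arbitrary choices for illustration, not the paper's experimental setup.

```python
import torch.nn as nn

input_size, hidden_size = 64, 128

# Same input/hidden dimensions, different gating mechanisms.
cells = {
    "RNN (tanh)": nn.RNNCell(input_size, hidden_size),
    "GRU":        nn.GRUCell(input_size, hidden_size),
    "LSTM":       nn.LSTMCell(input_size, hidden_size),
}

for name, cell in cells.items():
    n_params = sum(p.numel() for p in cell.parameters())
    print(f"{name}: {n_params} parameters")
```

Gated units (GRU, LSTM) cost roughly 3x and 4x the parameters of a plain tanh unit at equal width, which is one reason fair comparisons must control for model size.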
How can we efficiently propagate uncertainty in a latent state representation with recurrent neural networks?
Several variants of the Long Short-Term Memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995.
Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales.
This is impractical for long sequences such as musical compositions, since the memory required for the intermediate relative-position information is quadratic in the sequence length (a sketch of one workaround follows below).
Ranked #2 on Music Modeling on JSB Chorales
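One published way around that quadratic intermediate is the "skewing" reordering from the Music Transformer paper, which derives the relative-attention logits from an (L, L) product instead of materializing an (L, L, D) tensor of relative embeddings. The sketch below is a reconstruction from the paper's description, not the authors' code.

```python
import torch
import torch.nn.functional as F

def skew(qe):
    # qe = Q @ E^T with shape (..., L, L), where column k holds the score for
    # relative distance k - (L - 1); we want S_rel[i, j] = qe[i, j - i + L - 1].
    L = qe.size(-1)
    padded = F.pad(qe, (1, 0))                         # dummy column on the left
    reshaped = padded.reshape(*qe.shape[:-2], L + 1, L)
    return reshaped[..., 1:, :]                        # drop first row -> (..., L, L)

q = torch.randn(2, 4, 16, 8)                           # (batch, heads, L, head_dim)
e = torch.randn(16, 8)                                 # one embedding per relative distance
s_rel = skew(q @ e.t())                                # (2, 4, 16, 16)
# Entries with j > i are invalid after the reshape and must be removed by the
# causal mask before the softmax, as in any decoder-style attention.
```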
We show that training a neural network to predict a seemingly more complex sequence, with extra features included in the series being modelled, can significantly improve overall model performance (see the sketch below).
Ranked #1 on Music Modeling on JSB Chorales
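Here is a minimal sketch of what "extra features in the series being modelled" could look like, assuming pitch plus an auxiliary duration feature. The feature choice, the two-head design, and all sizes are assumptions for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class AugmentedPredictor(nn.Module):
    def __init__(self, n_pitches=88, n_durations=16, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(input_size=n_pitches + n_durations,
                           hidden_size=hidden, batch_first=True)
        self.pitch_head = nn.Linear(hidden, n_pitches)
        self.duration_head = nn.Linear(hidden, n_durations)  # the "extra" target

    def forward(self, x):                 # x: (batch, time, pitch+duration features)
        h, _ = self.rnn(x)
        return self.pitch_head(h), self.duration_head(h)

model = AugmentedPredictor()
pitch_logits, dur_logits = model(torch.randn(4, 32, 88 + 16))
# Training would sum a cross-entropy loss per head; the auxiliary duration
# loss is the kind of extra supervision the abstract credits with the gain.
```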
In this paper, we propose a new Recurrent Neural Network (RNN) architecture.