Recurrent Neural Networks have long been the dominant choice for sequence modeling.
SOTA for Music Modeling on Nottingham
Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales.
Building on this observation, we introduce a method for descriptor-based synthesis and show that we can control an instrument's descriptors while preserving its timbre structure.
Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory.
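The convolutional architecture referred to above is built from causal dilated convolutions, whose stacked dilations give the network its long effective memory. A minimal sketch of one such layer follows; the function name, kernel, and example values are illustrative, not taken from any paper's code.

```python
# Sketch of a causal dilated 1-D convolution, the core building block
# of a temporal convolutional network (TCN). Pure-Python illustration.

def causal_dilated_conv(x, kernel, dilation=1):
    """Convolve sequence x with kernel so that output[t] depends only on
    x[t], x[t - dilation], x[t - 2*dilation], ... (no future leakage)."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            j = t - i * dilation  # reach back in time only
            if j >= 0:
                acc += w * x[j]
        out.append(acc)
    return out

# Stacking layers with dilations 1, 2, 4, ... grows the receptive field
# exponentially with depth, unlike the step-by-step memory of an RNN.
seq = [1.0, 2.0, 3.0, 4.0, 5.0]
print(causal_dilated_conv(seq, [0.5, 0.5], dilation=2))
# → [0.5, 1.0, 2.0, 3.0, 4.0]
```

Each output is the average of the current input and the input two steps back; positions before the start of the sequence contribute nothing, which is what makes the convolution causal.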
#2 best model for Sequential Image Classification on Sequential MNIST
Our goal is to build a generative model from a deep neural network architecture that creates music with both harmony and melody and is passable as music composed by humans.
In this paper we compare different types of recurrent units in recurrent neural networks (RNNs).