(Image credit: R-Transformer)
Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory.
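The convolutional architecture in question is a temporal convolutional network (TCN), whose core building block is a causal dilated convolution. Below is a minimal sketch of that block; channel counts, kernel size, and dilation schedule are illustrative assumptions, not the paper's reference implementation, which also adds residual connections and weight normalization.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1D convolution left-padded so each output sees only past timesteps."""
    def __init__(self, channels, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation  # left padding preserves causality
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))  # pad the past side only

# Exponentially growing dilations roughly double the receptive field per layer,
# which is the "longer effective memory" the abstract refers to.
tcn = nn.Sequential(*(
    nn.Sequential(CausalConv1d(64, kernel_size=3, dilation=2 ** i), nn.ReLU())
    for i in range(4)
))
x = torch.randn(8, 64, 128)                        # (batch, channels, timesteps)
y = tcn(x)                                         # (8, 64, 128); receptive field = 31 steps
```

With kernel size 3 and dilations 1, 2, 4, 8, the stack sees 1 + 2·(1+2+4+8) = 31 past steps, so depth, not recurrence, determines how far back the model can look.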
In this paper we compare different types of recurrent units in recurrent neural networks (RNNs).
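As a sketch of how such unit comparisons are typically set up, the snippet below runs LSTM and GRU cells over the same input and counts their parameters; the sizes are illustrative assumptions, not the paper's experimental settings.

```python
import torch
import torch.nn as nn

x = torch.randn(16, 50, 32)     # (batch, timesteps, features), arbitrary sizes

lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)

out_lstm, (h_n, c_n) = lstm(x)  # LSTM carries separate hidden and cell states
out_gru, h_gru = gru(x)         # GRU merges them into one state with fewer gates

# Parameter counts expose the trade-off such comparisons study:
print(sum(p.numel() for p in lstm.parameters()))  # 4 gate blocks -> 25088 params
print(sum(p.numel() for p in gru.parameters()))   # 3 gate blocks -> 18816 (~25% fewer)
```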
Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales.
Based on this, we introduce a method for descriptor-based synthesis and show that we can control an instrument's descriptors while preserving its timbre structure.
We show that training a neural network to predict a seemingly more complex sequence, one with extra features added to the series being modeled, can significantly improve overall model performance.
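A minimal sketch of that idea: the target series is widened with an extra feature channel, and the network is trained to predict the augmented frame rather than the pitches alone. The names and dimensions here (88 piano pitches plus a hypothetical beat feature) are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

PITCHES, EXTRA = 88, 4                  # pitch targets + hypothetical extra features
rnn = nn.LSTM(input_size=PITCHES + EXTRA, hidden_size=128, batch_first=True)
head = nn.Linear(128, PITCHES + EXTRA)  # predict the full augmented frame

x = torch.rand(8, 64, PITCHES + EXTRA)  # (batch, timesteps, augmented features)
h, _ = rnn(x[:, :-1])                   # predict each next frame from the past
loss = F.binary_cross_entropy_with_logits(head(h), x[:, 1:])
loss.backward()                         # extra channels shape the learned representation
```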
SOTA for Music Modeling on JSB Chorales