84 papers with code • 0 benchmarks • 20 datasets
Music Generation is the task of automatically generating music.
These leaderboards are used to track progress in Music Generation.
Libraries
Use these libraries to find Music Generation models and implementations.
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
The three models, which differ in their underlying assumptions and, accordingly, their network architectures, are referred to as the jamming model, the composer model, and the hybrid model.
Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps.
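To make the scale concrete, here is a quick sketch of the arithmetic behind that claim: in raw audio, each sample is one timestep, so the number of timesteps per second equals the sample rate. The sample rates below are common standard values, not figures taken from the paper.

```python
# One raw-audio sample = one timestep, so timesteps/second = sample rate.
# 16 kHz and 22.05 kHz are common speech/music rates; 44.1 kHz is CD quality.
for sample_rate_hz in (16_000, 22_050, 44_100):
    timesteps_per_second = sample_rate_hz
    print(f"{sample_rate_hz} Hz -> {timesteps_per_second:,} timesteps per second")
```

Even at the lowest of these rates, a model must relate events tens of thousands of timesteps apart to capture structure spanning just a few seconds.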
We conduct a user study comparing the eight-bar melodies generated by MidiNet and by Google's MelodyRNN models, each time using the same priming melody.
Experimental results show that using binary neurons instead of hard thresholding (HT) or Bernoulli sampling (BS) indeed leads to better results in a number of objective measures.
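The two baselines being compared against are test-time binarization schemes applied to a generator's real-valued outputs. A minimal NumPy sketch of both, assuming outputs already lie in [0, 1] (the array values here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
probs = np.array([0.1, 0.4, 0.6, 0.9])  # illustrative real-valued generator outputs

# Hard thresholding (HT): deterministic rounding at 0.5.
ht = (probs > 0.5).astype(np.int8)

# Bernoulli sampling (BS): stochastic binarization, output is 1 with probability probs.
bs = (rng.random(probs.shape) < probs).astype(np.int8)

print("HT:", ht)  # -> [0 0 1 1]
print("BS:", bs)  # varies with the random seed
```

Both schemes are post-processing steps outside the network; the paper's point is that binary neurons let the binarization itself be part of the trained model.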
In this paper, we present a conceptually different approach that explicitly takes into account the type of the tokens, such as note types and metric types.
Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes.
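The "cycles in the network of nodes" amount to one recurrence: the hidden state computed at step t is fed back in at step t+1. A minimal sketch of that update (dimensions and initialization are arbitrary choices for illustration):

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    """One recurrent update: the previous hidden state cycles back into the network."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4
W_xh = rng.normal(size=(input_dim, hidden_dim))   # input-to-hidden weights
W_hh = rng.normal(size=(hidden_dim, hidden_dim))  # hidden-to-hidden (recurrent) weights
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)                 # initial hidden state
for x_t in rng.normal(size=(5, input_dim)):  # a length-5 input sequence
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
print(h.shape)  # (4,) -- a fixed-size summary of the whole sequence so far
```

Because the same weights are reused at every step, the hidden state can, in principle, carry information across arbitrarily long sequences.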