117 papers with code • 0 benchmarks • 23 datasets
Music Generation is the task of generating music or music-like sounds from a model or algorithm. The goal is to produce a sequence of notes or sound events that are similar to existing music in some way, such as having the same style, genre, or mood.
These leaderboards are used to track progress in Music Generation.
MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment
The three models, which differ in their underlying assumptions and, accordingly, their network architectures, are referred to as the jamming model, the composer model, and the hybrid model.
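For context, a rough sketch of the multi-track piano-roll representation such models operate on. The shapes follow the configuration reported in the MuseGAN paper (5 tracks, 4 bars per phrase, 96 timesteps per bar, 84 pitches), but the variable names and the example note are illustrative assumptions, not the paper's code:

```python
import numpy as np

# Multi-track piano-roll as a boolean tensor; shapes are illustrative,
# loosely following the MuseGAN configuration.
N_TRACKS, N_BARS, N_STEPS, N_PITCHES = 5, 4, 96, 84

pianoroll = np.zeros((N_TRACKS, N_BARS, N_STEPS, N_PITCHES), dtype=bool)

# Hypothetical note: turn on one pitch on the piano track for the
# first half of bar 0.
PIANO = 3
pianoroll[PIANO, 0, 0:48, 36] = True

# In the jamming model, each track would come from its own generator
# (one (bars, steps, pitches) slice); the composer model emits the
# whole tensor from a single shared generator.
print(pianoroll.shape)  # (5, 4, 96, 84)
```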
Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps.
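The "tens of thousands of timesteps" figure follows directly from standard audio sample rates; a quick arithmetic check:

```python
# One second of CD-quality audio at 44.1 kHz is 44,100 samples; even
# 16 kHz speech-style audio is 16,000 timesteps per second.
for sample_rate in (16_000, 22_050, 44_100):
    print(f"{sample_rate} Hz -> {sample_rate:,} timesteps per second, "
          f"{sample_rate * 60:,} per minute")
```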
We conduct a user study to compare eight-bar melodies generated by MidiNet and by Google's MelodyRNN models, each time using the same priming melody.
In this paper, we present a conceptually different approach that explicitly takes into account the types of the tokens, such as note types and metric types.
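One way to make token types explicit is to pair every event with a type tag, so that note tokens and metric (bar/position) tokens can be routed to separate embeddings or output heads. The vocabulary below is a hypothetical sketch of the general idea, not the paper's actual token scheme:

```python
from dataclasses import dataclass

# A typed-token sketch: each event carries an explicit kind so the
# model can treat note tokens and metric tokens differently.
@dataclass(frozen=True)
class Token:
    kind: str   # "bar", "position", "note-on", "duration", ...
    value: int

sequence = [
    Token("bar", 0),
    Token("position", 0),   # metric token: beat position within the bar
    Token("note-on", 60),   # note token: MIDI pitch (middle C)
    Token("duration", 4),   # note token: length in sixteenth notes
    Token("position", 8),
    Token("note-on", 64),
    Token("duration", 4),
]

# Grouping the vocabulary by type, e.g. to build per-type embeddings.
metric_kinds = {"bar", "position"}
print([t for t in sequence if t.kind in metric_kinds])
```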
Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes.
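The "cycles in the network of nodes" amount to a hidden state that is fed back into the network at every step. A minimal vanilla RNN cell in NumPy, with dimensions chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid = 8, 16  # arbitrary input and hidden sizes

# Parameters of a vanilla RNN cell.
W_xh = rng.normal(scale=0.1, size=(d_hid, d_in))
W_hh = rng.normal(scale=0.1, size=(d_hid, d_hid))
b_h = np.zeros(d_hid)

def rnn_step(x_t, h_prev):
    """One recurrent step: the new state depends on the input AND the
    previous state -- this feedback is the 'cycle' in the node graph."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

h = np.zeros(d_hid)
for x_t in rng.normal(size=(5, d_in)):  # a toy 5-step input sequence
    h = rnn_step(x_t, h)
print(h.shape)  # (16,)
```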
Experimental results show that using binary neurons instead of hard thresholding (HT) or Bernoulli sampling (BS) indeed leads to better results in a number of objective measures.
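The two baselines refer to binarizing a generator's real-valued piano-roll output at test time; a sketch of both on fake data (the paper's binary neurons instead move binarization inside the network with a straight-through gradient estimator, which is omitted here for brevity):

```python
import numpy as np

rng = np.random.default_rng(42)
probs = rng.uniform(size=(96, 84))  # fake real-valued generator output

# Hard thresholding (HT): deterministic cut at 0.5.
ht = probs > 0.5

# Bernoulli sampling (BS): treat each entry as an independent
# probability of that note cell being active.
bs = rng.uniform(size=probs.shape) < probs

print(ht.mean(), bs.mean())  # fraction of active cells under each scheme
```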
We propose the Multi-Track Music Machine (MMM), a generative system based on the Transformer architecture that is capable of generating multi-track music.
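A hedged sketch of what serializing multi-track music into a single token stream can look like, so that one Transformer attends across all tracks. The token names below are illustrative assumptions, not MMM's actual vocabulary:

```python
# Concatenating tracks (rather than interleaving timesteps) keeps each
# track contiguous, which makes per-track conditioning and infilling
# natural for an autoregressive model.
tokens = [
    "<PIECE_START>",
    "<TRACK_START>", "<INST=DRUMS>",
    "<BAR_START>", "NOTE_ON=36", "TIME_SHIFT=12", "NOTE_OFF=36", "<BAR_END>",
    "<TRACK_END>",
    "<TRACK_START>", "<INST=PIANO>",
    "<BAR_START>", "NOTE_ON=60", "TIME_SHIFT=24", "NOTE_OFF=60", "<BAR_END>",
    "<TRACK_END>",
    "<PIECE_END>",
]

# Map the (deduplicated, order-preserving) vocabulary to integer ids.
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(tokens))}
ids = [vocab[tok] for tok in tokens]
print(ids)
```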