Music Generation

127 papers with code • 0 benchmarks • 24 datasets

Music Generation is the task of generating music or music-like audio with a model or algorithm. The goal is to produce a sequence of notes or sound events that resembles existing music in properties such as style, genre, or mood.

Most implemented papers

Music Transformer

Natooz/MidiTok ICLR 2019

The existing implementation of relative attention is impractical for long sequences such as musical compositions, since its memory complexity for the intermediate relative information is quadratic in the sequence length.
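
For context, the quadratic cost comes from naively materializing an L x L x D tensor of relative position embeddings; the paper's "skewing" trick avoids this. A minimal numpy sketch of that trick (variable names and sizes are my own, not from the paper's code):

```python
import numpy as np

def skew(qe):
    """Rearrange q @ Er.T so entry (i, j) holds query i's logit against the
    embedding for relative distance j - i (Music Transformer's skew trick)."""
    L = qe.shape[0]
    padded = np.concatenate([np.zeros((L, 1)), qe], axis=1)  # (L, L+1)
    return padded.reshape(L + 1, L)[1:, :]                   # back to (L, L)

rng = np.random.default_rng(0)
L, D = 6, 8
q = rng.standard_normal((L, D))    # queries
er = rng.standard_normal((L, D))   # one embedding per relative distance
s_rel = skew(q @ er.T)             # relative logits in O(L*D) extra memory
```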

MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment

salu133445/musegan 19 Sep 2017

The three models, which differ in their underlying assumptions and accordingly in their network architectures, are referred to as the jamming model, the composer model, and the hybrid model.
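
The distinction among the three is easiest to see in how each structures the generator's latent input. A schematic numpy sketch (dimensions are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n_tracks, z_dim = 5, 64

# Jamming model: each track has its own generator and its own latent code.
z_jam = rng.standard_normal((n_tracks, z_dim))       # one z per track

# Composer model: one shared latent code drives a single multi-track generator.
z_comp = rng.standard_normal(z_dim)                  # one z for all tracks

# Hybrid model: each track's generator sees both a shared (inter-track) code
# and a private (intra-track) code, concatenated at the input.
z_inter = rng.standard_normal(z_dim)
z_intra = rng.standard_normal((n_tracks, z_dim))
z_hybrid = np.concatenate(
    [np.tile(z_inter, (n_tracks, 1)), z_intra], axis=1)  # (n_tracks, 2*z_dim)
```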

It's Raw! Audio Generation with State-Space Models

hazyresearch/state-spaces 20 Feb 2022

SaShiMi yields state-of-the-art performance for unconditional waveform generation in the autoregressive setting.
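
SaShiMi is built from S4 layers, whose core is a discrete linear state-space recurrence applied sample by sample. A toy version of that recurrence (the transition matrix below is a stable placeholder, not the HiPPO initialization S4 actually uses):

```python
import numpy as np

def ssm_step(A, B, C, x, u):
    """One step of a discrete linear state-space model:
    x' = A x + B u,  y = C x'  (the recurrence underlying S4/SaShiMi)."""
    x = A @ x + B * u
    return x, C @ x

rng = np.random.default_rng(0)
N = 16                               # state size
A = np.eye(N) * 0.95                 # toy stable transition (placeholder)
B = rng.standard_normal(N)
C = rng.standard_normal(N)

x = np.zeros(N)
audio_in = rng.standard_normal(100)  # stand-in for waveform samples
out = []
for u in audio_in:
    x, y = ssm_step(A, B, C, x, u)
    out.append(y)
```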

This Time with Feeling: Learning Expressive Musical Performance

Natooz/MidiTok 10 Aug 2018

Music generation has generally been focused on either creating scores or interpreting them.

MelNet: A Generative Model for Audio in the Frequency Domain

fatchord/MelNet 4 Jun 2019

Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps.
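
That gap is easy to quantify: moving to the frequency domain shortens the sequence by orders of magnitude, which is MelNet's motivation for modeling spectrograms. A quick check, assuming librosa is installed (parameters are illustrative, not MelNet's):

```python
import numpy as np
import librosa

# One second of 22 kHz audio is ~22k waveform samples...
sr = 22050
y = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s of a 440 Hz tone

# ...but only ~44 frames of a mel spectrogram at a hop of 512 samples.
S = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=2048, hop_length=512)
print(y.shape, S.shape)  # (22050,) vs (128, 44)
```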

MidiNet: A Convolutional Generative Adversarial Network for Symbolic-domain Music Generation

RichardYang40148/MidiNet 31 Mar 2017

We conduct a user study comparing eight-bar melodies generated by MidiNet and by Google's MelodyRNN models, each time using the same priming melody.

Enabling Factorized Piano Music Modeling and Generation with the MAESTRO Dataset

BShakhovsky/PolyphonicPianoTranscription ICLR 2019

Generating musical audio directly with neural networks is notoriously difficult because it requires coherently modeling structure at many different timescales.

Counterpoint by Convolution

czhuang/coconet 18 Mar 2019

Machine learning models of music typically break up the task of composition into a chronological process, composing a piece of music in a single pass from beginning to end.
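
Coconet's alternative is non-chronological: treat composition as iterative rewriting with blocked Gibbs sampling. A toy sketch with a random stand-in for the trained convolutional model (the real network predicts masked notes from the surrounding voices):

```python
import numpy as np

rng = np.random.default_rng(0)

def dummy_model(piece, mask):
    """Placeholder for Coconet's CNN: returns random MIDI pitches at the
    masked positions. A trained model would predict them from context."""
    return rng.integers(36, 82, size=mask.sum())

# Blocked Gibbs sampling: start from noise and repeatedly rewrite
# random subsets of notes instead of composing left to right.
T, V = 32, 4                                # timesteps, voices
piece = rng.integers(36, 82, size=(T, V))   # random initial "piece"
for _ in range(100):
    mask = rng.random((T, V)) < 0.25        # choose a block to rewrite
    piece[mask] = dummy_model(piece, mask)  # resample the masked notes
```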

Compound Word Transformer: Learning to Compose Full-Song Music over Dynamic Directed Hypergraphs

YatingMusic/compound-word-transformer 7 Jan 2021

In this paper, we present a conceptually different approach that explicitly takes into account the type of the tokens, such as note types and metric types.
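
The grouping is easiest to see as data: instead of a flat stream of single-attribute tokens, related tokens are bundled into one compound word per event. A schematic comparison (field names loosely follow the paper's token families; exact vocabularies are assumptions):

```python
# Flat token-per-attribute stream (REMI-style), consumed one token at a time.
token_stream = ["bar", "beat_1", "chord_Cmaj", "pitch_60", "dur_8", "vel_80"]

# Compound word: one super-token grouping a single event's attributes, so the
# model predicts the token's type first and then all of its fields in one step.
compound_word = {
    "type": "note",
    "beat": 1,
    "pitch": 60,
    "duration": 8,
    "velocity": 80,
}
```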

A Critical Review of Recurrent Neural Networks for Sequence Learning

junwang23/deepdirtycodes 29 May 2015

Recurrent neural networks (RNNs) are connectionist models that capture the dynamics of sequences via cycles in the network of nodes.
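
Those cycles are simply the hidden state feeding back into the next update. A minimal vanilla RNN step in numpy makes that concrete:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    """Vanilla RNN update: the cycle h_{t-1} -> h_t is what lets the
    network carry information across timesteps."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b)

rng = np.random.default_rng(0)
D, H = 8, 16
W_xh = rng.standard_normal((H, D)) * 0.1
W_hh = rng.standard_normal((H, H)) * 0.1
b = np.zeros(H)

h = np.zeros(H)                            # initial hidden state
for x_t in rng.standard_normal((20, D)):   # a length-20 input sequence
    h = rnn_step(x_t, h, W_xh, W_hh, b)    # h accumulates sequence history
```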