Music Generation

131 papers with code • 0 benchmarks • 24 datasets

Music Generation is the task of generating music or music-like sound with a model or algorithm. The goal is to produce a sequence of notes or sound events that resemble existing music in some way, such as sharing its style, genre, or mood.

Exploring XAI for the Arts: Explaining Latent Space in Generative Music

bbanar2/exploring_xai_in_genmus_via_lsr 10 Aug 2023

We increase the explainability of the model by: (i) using latent space regularisation to force specific dimensions of the latent space to map to meaningful musical attributes; (ii) providing a user-interface feedback loop that allows people to adjust latent space dimensions and observe the results of these changes in real time; and (iii) visualising the musical attributes in the latent space to help people understand and predict the effect of changes to latent space dimensions.
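
As a rough illustration of point (i), latent space regularisation can be sketched as a penalty that ties one latent dimension to a musical attribute such as note density. The loss below is a hypothetical toy, not the paper's implementation: it penalises pairs of examples whose ordering along the chosen latent dimension contradicts their ordering by the attribute.

```python
import numpy as np

def latent_attribute_reg(z, attr, dim=0):
    """Toy latent-space regularisation loss (hypothetical, not the paper's code).

    Encourages latent dimension `dim` to be ordered the same way as a
    musical attribute (e.g. note density): for every pair of examples,
    the difference in z[:, dim] should have the same sign as the
    difference in attr. Pairs that disagree incur a hinge penalty.
    """
    zd = z[:, dim]
    dz = zd[:, None] - zd[None, :]       # pairwise latent differences
    da = attr[:, None] - attr[None, :]   # pairwise attribute differences
    # Penalty is positive only where the latent ordering contradicts the attribute ordering.
    return np.mean(np.maximum(0.0, -np.sign(da) * dz))

# A latent dimension perfectly aligned with the attribute incurs zero loss.
z_good = np.array([[0.0], [1.0], [2.0]])
attr = np.array([0.0, 1.0, 2.0])
loss_good = latent_attribute_reg(z_good, attr)

# A reversed latent dimension is penalised.
z_bad = np.array([[2.0], [1.0], [0.0]])
loss_bad = latent_attribute_reg(z_bad, attr)
```

Minimising such a term alongside the usual reconstruction loss is one common way to make a latent dimension interpretable as a control knob.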

JEN-1: Text-Guided Universal Music Generation with Omnidirectional Diffusion Models

0417keito/JEN-1-pytorch 9 Aug 2023

Despite the task's significance, prevailing generative models exhibit limitations in music quality, computational efficiency, and generalization.

MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies

retrocirce/musicldm 3 Aug 2023

Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation.

Graph-based Polyphonic Multitrack Music Generation

emanuelecosenza/polyphemus 27 Jul 2023

Graphs can be leveraged to model polyphonic multitrack symbolic music, where notes, chords and entire sections may be linked at different levels of the musical hierarchy by tonal and rhythmic relationships.
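
To make the graph view concrete, here is a minimal sketch (an assumed illustration, not Polyphemus's actual data structure) in which notes are nodes and typed edges encode "sounds together" (harmonic) and "follows" (temporal) relations:

```python
# Notes as graph nodes; attributes are illustrative.
notes = [
    {"id": 0, "pitch": 60, "onset": 0.0, "track": "piano"},   # C4
    {"id": 1, "pitch": 64, "onset": 0.0, "track": "piano"},   # E4, same onset -> chord
    {"id": 2, "pitch": 67, "onset": 1.0, "track": "violin"},  # G4, next beat
]

def build_edges(notes):
    """Connect every note pair with a typed edge: simultaneous or temporal."""
    edges = []
    for a in notes:
        for b in notes:
            if a["id"] >= b["id"]:
                continue  # each unordered pair once
            if a["onset"] == b["onset"]:
                edges.append((a["id"], b["id"], "simultaneous"))
            else:
                edges.append((a["id"], b["id"], "follows"))
    return edges

edges = build_edges(notes)
```

A graph neural network can then operate on such typed edges to capture both the vertical (chordal) and horizontal (rhythmic) structure of the score.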

Polyffusion: A Diffusion Model for Polyphonic Score Generation with Internal and External Controls

aik2mlj/polyffusion 19 Jul 2023

We propose Polyffusion, a diffusion model that generates polyphonic music scores by regarding music as image-like piano roll representations.
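
The piano-roll idea can be sketched as follows (an assumed generic representation, not Polyffusion's exact format): a binary matrix of shape (pitches, time steps), where a 1 means the pitch is sounding at that step. Because the roll is image-like, image diffusion architectures can operate on it directly.

```python
import numpy as np

def to_piano_roll(notes, num_pitches=128, num_steps=16):
    """Render (MIDI pitch, start step, length in steps) triples onto a binary grid."""
    roll = np.zeros((num_pitches, num_steps), dtype=np.uint8)
    for pitch, start, duration in notes:
        roll[pitch, start:start + duration] = 1
    return roll

# C major triad held for 4 steps, then a high C for 2 steps.
roll = to_piano_roll([(60, 0, 4), (64, 0, 4), (67, 0, 4), (72, 4, 2)])
```

Each row is a pitch and each column a time step, so standard 2-D convolutions apply without modification.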

VampNet: Music Generation via Masked Acoustic Token Modeling

hugofloresgarcia/vampnet 10 Jul 2023

We introduce VampNet, a masked acoustic token modeling approach to music synthesis, compression, inpainting, and variation.
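
The masked-token generation loop can be sketched in miniature (assumed mechanics only, not VampNet's implementation, and with a random stand-in for the trained network): start from a fully masked token sequence and iteratively fill in the most confident predictions until no masks remain.

```python
import random

MASK = -1  # sentinel for a masked acoustic token

def dummy_model(tokens):
    """Stand-in for a trained network: predicts a token id 0..7 with a random confidence."""
    return [(random.randrange(8), random.random()) for _ in tokens]

def iterative_unmask(length=16, seed=0):
    """Iteratively replace masked positions, keeping the most confident half each round."""
    random.seed(seed)
    tokens = [MASK] * length
    while MASK in tokens:
        preds = dummy_model(tokens)
        masked = [i for i, t in enumerate(tokens) if t == MASK]
        # Commit the most confident predictions among the still-masked positions.
        masked.sort(key=lambda i: preds[i][1], reverse=True)
        for i in masked[:max(1, len(masked) // 2)]:
            tokens[i] = preds[i][0]
    return tokens

result = iterative_unmask()
```

The same loop supports inpainting and variation by simply starting with some positions unmasked.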

EmoGen: Eliminating Subjective Bias in Emotional Music Generation

microsoft/muzic 3 Jul 2023

In this paper, we propose EmoGen, an emotional music generation system that leverages a set of emotion-related music attributes as the bridge between emotion and music, and divides the generation into two stages: emotion-to-attribute mapping with supervised clustering, and attribute-to-music generation with self-supervised learning.
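
The two-stage idea can be sketched as below; the attribute names, values, and the rule-based "generator" are invented for illustration and are not EmoGen's components. The point is that the generator is conditioned only on objective attributes, so subjective emotion labels never reach it directly.

```python
# Stage 1: emotion -> attribute vector (hypothetical attributes and values).
EMOTION_TO_ATTRIBUTES = {
    "happy": {"tempo_bpm": 132, "mode": "major", "note_density": 0.8},
    "sad":   {"tempo_bpm": 68,  "mode": "minor", "note_density": 0.3},
}

def generate_from_attributes(attrs, num_beats=4):
    """Stage 2 stand-in: a trivial rule-based 'generator' that sees only attributes."""
    root = 60  # C4
    scale = [0, 2, 4, 5, 7] if attrs["mode"] == "major" else [0, 2, 3, 5, 7]
    notes_per_beat = max(1, round(attrs["note_density"] * 4))
    return [root + scale[i % len(scale)] for i in range(num_beats * notes_per_beat)]

melody = generate_from_attributes(EMOTION_TO_ATTRIBUTES["happy"])
```

Swapping the emotion label only changes the attribute vector; any bias in emotion annotation is confined to the first stage.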

Simple and Controllable Music Generation

facebookresearch/audiocraft NeurIPS 2023

We tackle the task of conditional music generation.

08 Jun 2023

MuseCoco: Generating Symbolic Music from Text

microsoft/muzic 31 May 2023

In contrast, symbolic music offers ease of editing, making it more accessible for users to manipulate specific musical elements.

GETMusic: Generating Any Music Tracks with a Unified Representation and Diffusion Framework

microsoft/muzic 18 May 2023

Our proposed representation, coupled with the non-autoregressive generative model, enables GETMusic to generate music with arbitrary source-target track combinations.
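
One way to picture such a unified representation (an assumed illustration with invented track names and token values, not GETMusic's actual format): every track occupies a row of the same token grid, source tracks carry real tokens, and target tracks are filled with a mask token for the model to complete non-autoregressively.

```python
MASK = -1  # sentinel for positions the model must fill in

def make_input(tracks, targets, num_steps=8):
    """tracks: dict of track name -> token list; targets: track names to generate."""
    grid = {}
    for name, tokens in tracks.items():
        # Target tracks are fully masked; source tracks keep their tokens.
        grid[name] = [MASK] * num_steps if name in targets else tokens[:num_steps]
    return grid

# Generate bass and drums conditioned on a given melody.
grid = make_input(
    {"melody": [5, 7, 9, 7, 5, 4, 2, 0], "bass": [0] * 8, "drums": [1] * 8},
    targets={"bass", "drums"},
)
```

Because source and target roles are expressed purely through masking, the same model handles any track combination without retraining.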
