Music Generation

129 papers with code • 0 benchmarks • 24 datasets

Music Generation is the task of producing music or music-like audio with a model or algorithm. The goal is to generate a sequence of notes or sound events that resembles existing music in some respect, such as style, genre, or mood.

Latest papers with no code

MusicMagus: Zero-Shot Text-to-Music Editing via Diffusion Models

no code yet • 9 Feb 2024

This paper introduces a novel approach to the editing of music generated by such models, enabling the modification of specific attributes, such as genre, mood and instrument, while maintaining other aspects unchanged.

MusicRL: Aligning Music Generation to Human Preferences

no code yet • 6 Feb 2024

MusicRL is a pretrained autoregressive MusicLM (Agostinelli et al., 2023) model of discrete audio tokens finetuned with reinforcement learning to maximise sequence-level rewards.
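The paper releases no code, but the core idea, finetuning a token-level generative model with reinforcement learning to maximise a sequence-level reward, can be sketched in miniature. The toy below applies REINFORCE with a baseline to a per-position categorical token policy; the vocabulary, the reward function, and the policy shape are all invented for illustration (the real system finetunes a pretrained autoregressive MusicLM on human-preference rewards).

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def reinforce_finetune(seq_len=8, vocab=4, steps=300, batch=16, lr=0.3, seed=0):
    """Nudge a toy per-position token policy toward higher sequence-level reward."""
    rng = np.random.default_rng(seed)
    logits = np.zeros((seq_len, vocab))              # stand-in for a pretrained model
    reward = lambda seq: float((seq == 0).mean())    # invented reward: frequency of token 0
    for _ in range(steps):
        probs = softmax(logits)
        samples = [np.array([rng.choice(vocab, p=probs[t]) for t in range(seq_len)])
                   for _ in range(batch)]
        rs = np.array([reward(s) for s in samples])
        baseline = rs.mean()                         # variance-reduction baseline
        grad = np.zeros_like(logits)
        for s, r in zip(samples, rs):
            # score-function (REINFORCE) gradient of the log-likelihood,
            # weighted by the centred sequence-level reward
            grad += (r - baseline) * (np.eye(vocab)[s] - probs)
        logits += lr * grad / batch                  # gradient ascent on expected reward
    return softmax(logits)

probs = reinforce_finetune()
```

After training, the policy assigns most of its probability mass to the rewarded token at every position, which is the sequence-level analogue of steering generations toward preferred outputs.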

DITTO: Diffusion Inference-Time T-Optimization for Music Generation

no code yet • 22 Jan 2024

We propose Diffusion Inference-Time T-Optimization (DITTO), a general-purpose framework for controlling pre-trained text-to-music diffusion models at inference time by optimizing the initial noise latents.
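The key mechanism, treating the initial noise latent as the optimization variable while the sampler stays frozen, can be illustrated with a drastically simplified stand-in: here the "sampler" is just a fixed linear map and the control target a feature vector, both invented for this sketch (the real method backpropagates through a diffusion sampling loop).

```python
import numpy as np

# Toy stand-in for a frozen, deterministic sampling process: a fixed linear
# map from the initial noise latent to an output feature vector.
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 10))          # "sampler" (a real one is a DDIM-style loop)
target = rng.normal(size=6)           # control target, e.g. desired output features

def sample(z):
    return A @ z

def optimize_initial_latent(steps=500, lr=0.02):
    """Gradient descent on the initial latent z so the sampler output matches the target."""
    z = rng.normal(size=10)
    for _ in range(steps):
        residual = sample(z) - target
        z -= lr * (2 * A.T @ residual)   # analytic gradient of ||A z - target||^2
    return z

z_opt = optimize_initial_latent()
final_loss = float(np.sum((sample(z_opt) - target) ** 2))
```

The sampler's parameters are never touched; only the starting noise moves, which is what makes this an inference-time (rather than finetuning) control method.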

Multi-view MidiVAE: Fusing Track- and Bar-view Representations for Long Multi-track Symbolic Music Generation

no code yet • 15 Jan 2024

Variational Autoencoders (VAEs) are a crucial component of neural symbolic music generation; several VAE-based works have yielded outstanding results and attracted considerable attention.

MCMChaos: Improvising Rap Music with MCMC Methods and Chaos Theory

no code yet • 15 Jan 2024

In each version, values simulated from each respective mathematical model alter the rate of speech, volume, and (in the multiple voice case) the voice of the text-to-speech engine on a line-by-line basis.
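The paper does not specify its implementation here, but the idea of a chaotic system driving per-line performance parameters can be sketched with the logistic map, a classic chaos-theory model; the parameter ranges and the mapping to speech rate and volume below are illustrative assumptions.

```python
def logistic_map(x0=0.5, r=3.9, n=10):
    """Iterate the logistic map x -> r*x*(1-x), chaotic for r near 4."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def per_line_controls(lines, rate_range=(80, 220), vol_range=(0.4, 1.0)):
    """Map chaotic values in (0, 1) to a per-line speech rate (wpm) and volume."""
    xs = logistic_map(n=len(lines))
    controls = []
    for line, x in zip(lines, xs):
        rate = rate_range[0] + x * (rate_range[1] - rate_range[0])
        vol = vol_range[0] + x * (vol_range[1] - vol_range[0])
        controls.append((line, round(rate, 1), round(vol, 2)))
    return controls

controls = per_line_controls(["first line", "second line", "third line"])
```

Each lyric line gets its own (rate, volume) pair, so the delivery varies unpredictably but deterministically from line to line, which matches the line-by-line modulation the abstract describes.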

StemGen: A music generation model that listens

no code yet • 14 Dec 2023

End-to-end generation of musical audio using deep learning techniques has seen an explosion of activity recently.

Computational Copyright: Towards A Royalty Model for Music Generative AI

no code yet • 11 Dec 2023

Our methodology involves a detailed analysis of existing royalty models in platforms like Spotify and YouTube, and adapting these to the unique context of AI-generated music.

Automatic Time Signature Determination for New Scores Using Lyrics for Latent Rhythmic Structure

no code yet • 27 Nov 2023

In this paper, we propose a novel approach that uses only lyrics as input to automatically generate a fitting time signature for lyrical songs and to uncover the latent rhythmic structure using explainable machine learning models.

Equipping Pretrained Unconditional Music Transformers with Instrument and Genre Controls

no code yet • 21 Nov 2023

We then propose a simple technique to equip this pretrained unconditional music transformer model with instrument and genre controls by finetuning the model with additional control tokens.
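The control-token technique amounts to a data-preparation step: prepend special tokens for the desired instrument and genre to each training sequence, then finetune the unconditional model on the augmented data. The token ids and vocabularies below are hypothetical, invented purely to illustrate the scheme.

```python
# Hypothetical control-token ids appended to the model's vocabulary.
INSTRUMENT_TOKENS = {"piano": 1001, "guitar": 1002, "drums": 1003}
GENRE_TOKENS = {"jazz": 2001, "rock": 2002, "classical": 2003}

def add_control_tokens(event_ids, instrument, genre):
    """Prepend instrument and genre control tokens to a symbolic-music event
    sequence, so an unconditional model can learn to condition on them."""
    return [INSTRUMENT_TOKENS[instrument], GENRE_TOKENS[genre]] + list(event_ids)

seq = add_control_tokens([17, 42, 99], instrument="piano", genre="jazz")
```

At inference time, generation is steered by seeding the context with the same control tokens, with no architectural change to the pretrained transformer.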

Are Words Enough? On the semantic conditioning of affective music generation

no code yet • 7 Nov 2023

In detail, we review two main paradigms adopted in automatic music generation: rules-based and machine-learning models.