Music Generation
129 papers with code • 0 benchmarks • 24 datasets
Music Generation is the task of generating music or music-like sounds from a model or algorithm. The goal is to produce a sequence of notes or sound events that are similar to existing music in some way, such as having the same style, genre, or mood.
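As a minimal illustration of the task (a toy sketch, not tied to any paper below), a first-order Markov chain over pitches can generate note sequences that mimic the transition statistics of a training melody; the melody and all names here are hypothetical:

```python
import random

def train_markov(melody):
    """Collect first-order pitch-transition counts from a training melody."""
    transitions = {}
    for a, b in zip(melody, melody[1:]):
        transitions.setdefault(a, []).append(b)
    return transitions

def generate(transitions, start, length, seed=0):
    """Sample a new melody by walking the learned transition table."""
    rng = random.Random(seed)
    notes = [start]
    while len(notes) < length:
        nexts = transitions.get(notes[-1])
        # Dead end (no observed successor): restart from the start pitch
        notes.append(rng.choice(nexts) if nexts else start)
    return notes

# MIDI pitch numbers for a short C-major training phrase
melody = [60, 62, 64, 65, 64, 62, 60, 64, 67, 65, 64, 62, 60]
table = train_markov(melody)
print(generate(table, start=60, length=8))
```

Modern systems replace the transition table with deep generative models (transformers, VAEs, diffusion models), but the goal is the same: sample sequences that match the statistics of real music.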
Benchmarks
These leaderboards are used to track progress in Music Generation.
Libraries
Use these libraries to find Music Generation models and implementations.
Datasets
Latest papers with no code
MusicMagus: Zero-Shot Text-to-Music Editing via Diffusion Models
This paper introduces a novel approach to editing music generated by text-to-music diffusion models, enabling the modification of specific attributes, such as genre, mood, and instrument, while keeping other aspects unchanged.
MusicRL: Aligning Music Generation to Human Preferences
MusicRL is a pretrained autoregressive MusicLM (Agostinelli et al., 2023) model of discrete audio tokens finetuned with reinforcement learning to maximise sequence-level rewards.
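The core mechanism of finetuning a token model to maximise a sequence-level reward can be sketched with plain REINFORCE on a toy i.i.d. token policy; the vocabulary size, reward, and learning rate below are made up for illustration and are far simpler than the MusicLM setup:

```python
import math
import random

rng = random.Random(0)
VOCAB, SEQ_LEN, LR = 4, 6, 0.1
logits = [0.0] * VOCAB          # stand-in for the model's output logits

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sample_sequence(p):
    """Draw tokens i.i.d. from the policy (a real model conditions on the prefix)."""
    return [rng.choices(range(VOCAB), weights=p)[0] for _ in range(SEQ_LEN)]

def reward(seq):
    """Hypothetical sequence-level reward: fraction of a 'preferred' token (2)."""
    return seq.count(2) / SEQ_LEN

# REINFORCE: nudge logits along R * grad log pi(sequence)
for _ in range(300):
    p = softmax(logits)
    seq = sample_sequence(p)
    R = reward(seq)
    for tok in seq:
        for k in range(VOCAB):
            indicator = 1.0 if k == tok else 0.0
            logits[k] += LR * R * (indicator - p[k])  # d log softmax / d logits

p_final = softmax(logits)
print(p_final.index(max(p_final)))  # the policy shifts toward the rewarded token
```

In the paper the reward comes from human preference data rather than a hand-written function, but the update has this shape: sample a sequence, score it as a whole, and reinforce the tokens that produced it in proportion to the score.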
DITTO: Diffusion Inference-Time T-Optimization for Music Generation
We propose Diffusion Inference-Time T-Optimization (DITTO), a general-purpose framework for controlling pre-trained text-to-music diffusion models at inference time via optimizing initial noise latents.
Multi-view MidiVAE: Fusing Track- and Bar-view Representations for Long Multi-track Symbolic Music Generation
Variational Autoencoders (VAEs) constitute a crucial component of neural symbolic music generation, among which some works have yielded outstanding results and attracted considerable attention.
MCMChaos: Improvising Rap Music with MCMC Methods and Chaos Theory
In each version, values simulated from each respective mathematical model alter the rate of speech, volume, and (in the multiple voice case) the voice of the text-to-speech engine on a line-by-line basis.
StemGen: A music generation model that listens
End-to-end generation of musical audio using deep learning techniques has seen an explosion of activity recently.
Computational Copyright: Towards A Royalty Model for Music Generative AI
Our methodology involves a detailed analysis of existing royalty models in platforms like Spotify and YouTube, and adapting these to the unique context of AI-generated music.
Automatic Time Signature Determination for New Scores Using Lyrics for Latent Rhythmic Structure
In this paper, we propose a novel approach that only uses lyrics as input to automatically generate a fitting time signature for lyrical songs and uncover the latent rhythmic structure utilizing explainable machine learning models.
Equipping Pretrained Unconditional Music Transformers with Instrument and Genre Controls
We then propose a simple technique to equip this pretrained unconditional music transformer model with instrument and genre controls by finetuning the model with additional control tokens.
Are Words Enough? On the semantic conditioning of affective music generation
In detail, we review two main paradigms adopted in automatic music generation: rules-based and machine-learning models.