Music Generation

129 papers with code • 0 benchmarks • 24 datasets

Music Generation is the task of generating music or music-like sounds from a model or algorithm. The goal is to produce a sequence of notes or sound events that resemble existing music in some way, such as sharing its style, genre, or mood.
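
As a minimal illustration of symbolic music generation, a first-order Markov chain over MIDI pitches can sample note sequences; the transition table below is hand-made for the sketch, not learned from any dataset or paper.

```python
import random

# Illustrative first-order Markov chain over MIDI pitches in C major.
# The transition table is hand-crafted for this sketch, not learned from data.
TRANSITIONS = {
    60: [62, 64, 67],   # C4 -> D4, E4, G4
    62: [60, 64],       # D4 -> C4, E4
    64: [62, 65, 67],   # E4 -> D4, F4, G4
    65: [64, 67],       # F4 -> E4, G4
    67: [60, 64, 65],   # G4 -> C4, E4, F4
}

def generate(start=60, length=16, seed=0):
    """Sample a melody as a list of MIDI pitch numbers."""
    rng = random.Random(seed)
    notes = [start]
    for _ in range(length - 1):
        notes.append(rng.choice(TRANSITIONS[notes[-1]]))
    return notes

print(generate())
```

Real systems replace the hand-made table with probabilities learned from corpora, or with neural sequence models, but the sampling loop is conceptually the same.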

Latest papers with no code

MuPT: A Generative Symbolic Music Pretrained Transformer

no code yet • 9 Apr 2024

In this paper, we explore the application of Large Language Models (LLMs) to the pre-training of music.

A Novel Bi-LSTM And Transformer Architecture For Generating Tabla Music

no code yet • 6 Apr 2024

In this technical paper, methods for generating classical Indian music, specifically tabla music, are proposed.

The NES Video-Music Database: A Dataset of Symbolic Video Game Music Paired with Gameplay Videos

no code yet • 5 Apr 2024

To address this research gap, we introduce a novel dataset named NES-VMDB, containing 98,940 gameplay videos from 389 NES games, each paired with its original soundtrack in symbolic format (MIDI).

Motifs, Phrases, and Beyond: The Modelling of Structure in Symbolic Music Generation

no code yet • 12 Mar 2024

Modelling musical structure is vital yet challenging for artificial intelligence systems that generate symbolic music compositions.

ByteComposer: a Human-like Melody Composition Method based on Language Model Agent

no code yet • 24 Feb 2024

Large Language Models (LLM) have shown encouraging progress in multimodal understanding and generation tasks.

A Survey of Music Generation in the Context of Interaction

no code yet • 23 Feb 2024

In recent years, machine learning, and in particular generative adversarial neural networks (GANs) and attention-based neural networks (transformers), have been successfully used to compose and generate music, both melodies and polyphonic pieces.

Structure-informed Positional Encoding for Music Generation

no code yet • 20 Feb 2024

Music generated by deep learning methods often suffers from a lack of coherence and long-term organization.
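
For background, the generic sinusoidal positional encoding of the original transformer (Vaswani et al., 2017), which structure-aware variants like this one modify, can be computed as below; this is the standard formulation, not the paper's proposed encoding.

```python
import math

def sinusoidal_pe(seq_len, d_model):
    """Standard transformer positional encoding:
       PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
       PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
    """
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe
```

Because this encoding depends only on absolute token position, it carries no information about musical structure (bars, phrases, sections), which is the gap structure-informed variants aim to fill.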

An Order-Complexity Aesthetic Assessment Model for Aesthetic-aware Music Recommendation

no code yet • 13 Feb 2024

In order to improve the quality of AI music generation and further guide computer music production, synthesis, recommendation, and other tasks, we use Birkhoff's aesthetic measure to design an aesthetic model that objectively measures the aesthetic beauty of music and forms a recommendation list according to the aesthetic appeal of the music.

MusicMagus: Zero-Shot Text-to-Music Editing via Diffusion Models

no code yet • 9 Feb 2024

This paper introduces a novel approach to the editing of music generated by such models, enabling the modification of specific attributes, such as genre, mood and instrument, while maintaining other aspects unchanged.

MusicRL: Aligning Music Generation to Human Preferences

no code yet • 6 Feb 2024

MusicRL is a pretrained autoregressive MusicLM (Agostinelli et al., 2023) model of discrete audio tokens finetuned with reinforcement learning to maximise sequence-level rewards.
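
Finetuning a sequence model with a sequence-level reward can be sketched with a toy REINFORCE update; the four-token categorical policy and the reward function below are placeholders for illustration only, not MusicRL's actual model, reward, or algorithm.

```python
import math
import random

# Toy REINFORCE sketch: a categorical policy over 4 "audio tokens",
# updated with a sequence-level reward. A placeholder stand-in for
# RL finetuning of a large autoregressive model.
logits = [0.0, 0.0, 0.0, 0.0]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sample_sequence(rng, length=8):
    probs = softmax(logits)
    return [rng.choices(range(4), weights=probs)[0] for _ in range(length)]

def reward(seq):
    # Illustrative sequence-level reward: prefer token 2.
    return seq.count(2) / len(seq)

def reinforce_step(rng, lr=0.5):
    seq = sample_sequence(rng)
    r = reward(seq)
    probs = softmax(logits)
    # REINFORCE: push up log-prob of sampled tokens, scaled by the reward.
    for tok in seq:
        for k in range(4):
            grad = (1.0 if k == tok else 0.0) - probs[k]
            logits[k] += lr * r * grad
    return r

rng = random.Random(0)
for _ in range(200):
    reinforce_step(rng)
```

After training, the policy shifts probability mass toward the rewarded token; real systems add a baseline and a KL penalty against the pretrained model, details this sketch omits.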