Music Generation

129 papers with code • 0 benchmarks • 24 datasets

Music Generation is the task of generating music or music-like sounds with a model or algorithm. The goal is to produce a sequence of notes or sound events that resemble existing music in some way, such as sharing its style, genre, or mood.

Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model

amaai-lab/video2music 2 Nov 2023

These distinct features are then employed as guiding input to our music generation model.

121 stars

JEN-1 Composer: A Unified Framework for High-Fidelity Multi-Track Music Generation

0417keito/JEN-1-COMPOSER-pytorch 29 Oct 2023

With rapid advances in generative artificial intelligence, the text-to-music synthesis task has emerged as a promising direction for music generation from scratch.

24 stars

miditok: A Python package for MIDI file tokenization

Natooz/MidiTok 26 Oct 2023

Recent progress in natural language processing has been adapted to the symbolic music modality.

579 stars
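
Tokenization turns symbolic music into discrete tokens a language model can consume. The sketch below shows the idea behind a REMI-style tokenizer in plain Python; the note format and token names are illustrative simplifications, not miditok's actual API.

```python
# REMI-style tokenization sketch: notes become Bar / Position / Pitch /
# Duration tokens. Illustrative only; real tokenizers (e.g. miditok)
# also handle velocity, tempo, time signatures, and vocabulary building.

def tokenize_notes(notes, ticks_per_bar=16):
    """Turn (start_tick, pitch, duration) triples into REMI-like tokens."""
    tokens = []
    current_bar = -1
    for start, pitch, duration in sorted(notes):
        bar = start // ticks_per_bar
        while current_bar < bar:          # emit a Bar token at each barline
            current_bar += 1
            tokens.append("Bar")
        tokens.append(f"Position_{start % ticks_per_bar}")
        tokens.append(f"Pitch_{pitch}")
        tokens.append(f"Duration_{duration}")
    return tokens

notes = [(0, 60, 4), (4, 64, 4), (16, 67, 8)]   # two notes in bar 0, one in bar 1
print(tokenize_notes(notes))
```

The resulting token sequence can then be modeled with the same architectures used for text.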

Content-based Controls For Music Large Language Modeling

kikyo-16/coco-mulla-repo 26 Oct 2023

We aim to further equip the models with direct and content-based controls on innate music languages such as pitch, chords and drum track.

19 stars

Unsupervised Lead Sheet Generation via Semantic Compression

zacharynovack/lead-ae 16 Oct 2023

Lead sheets have become commonplace in generative music research, being used as an initial compressed representation for downstream tasks like multitrack music generation and automatic arrangement.

4 stars

CoCoFormer: A controllable feature-rich polyphonic music generation method

zjy0401/cocoformer 15 Oct 2023

This paper explores methods for modeling polyphonic music sequences.

6 stars

Impact of time and note duration tokenizations on deep learning symbolic music modeling

Natooz/music-modeling-time-duration 12 Oct 2023

Symbolic music is widely used in various deep learning tasks, including generation, transcription, synthesis, and Music Information Retrieval (MIR).

9 stars
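
The design choice this paper studies can be illustrated by contrasting two common ways to encode timing in token sequences: explicit time-shift tokens between events versus an explicit duration token per note. The note format and token names below are assumptions for illustration.

```python
# Two timing tokenizations for the same (start, pitch, duration) notes.
# Monophonic case only, for simplicity.

def timeshift_encoding(notes):
    """Encode gaps between events as explicit TimeShift tokens."""
    tokens, clock = [], 0
    for start, pitch, duration in sorted(notes):
        if start > clock:                       # silence before this note
            tokens.append(f"TimeShift_{start - clock}")
        tokens += [f"NoteOn_{pitch}", f"TimeShift_{duration}", f"NoteOff_{pitch}"]
        clock = start + duration
    return tokens

def duration_encoding(notes):
    """Attach an explicit Duration token to each note instead."""
    tokens = []
    for start, pitch, duration in sorted(notes):
        tokens += [f"Position_{start}", f"Pitch_{pitch}", f"Duration_{duration}"]
    return tokens
```

The same notes yield sequences of different lengths and vocabularies, which is the kind of difference whose impact on downstream model quality the paper measures.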

Investigating Personalization Methods in Text to Music Generation

zelaki/DreamSound 20 Sep 2023

In this work, we investigate the personalization of text-to-music diffusion models in a few-shot setting.

24 stars

Exploring XAI for the Arts: Explaining Latent Space in Generative Music

bbanar2/exploring_xai_in_genmus_via_lsr 10 Aug 2023

We increase the explainability of the model by:
i) using latent space regularisation to force specific dimensions of the latent space to map to meaningful musical attributes;
ii) providing a user-interface feedback loop that lets people adjust dimensions of the latent space and observe the results of these changes in real time;
iii) providing a visualisation of the musical attributes in the latent space to help people understand and predict the effect of changes to latent space dimensions.

9 stars
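
One simple way to implement a regularisation term like the one in (i) is to penalise the distance between a chosen latent dimension and a target musical attribute, added to the usual VAE loss. This is a hypothetical pure-Python sketch of that idea, not the paper's exact formulation.

```python
# Latent-space regularisation sketch: push latent dimension `dim` to track
# a musical attribute (e.g. normalised note density) via mean squared error.
# In training, this term would be added to the VAE reconstruction + KL loss.

def attribute_regularisation(latents, attributes, dim=0):
    """MSE between latent dimension `dim` and a target attribute per example.

    latents    : list of latent vectors (lists of floats) for a batch
    attributes : one attribute value per example, on the latent's scale
    """
    errs = [(z[dim] - a) ** 2 for z, a in zip(latents, attributes)]
    return sum(errs) / len(errs)

# Latent dim 0 already matches the attribute, so the penalty is zero:
print(attribute_regularisation([[0.2, 1.0], [0.8, -0.3]], [0.2, 0.8]))  # 0.0
```

After training with such a term, sliding that latent dimension in the user interface changes the corresponding musical attribute predictably.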

JEN-1: Text-Guided Universal Music Generation with Omnidirectional Diffusion Models

0417keito/JEN-1-pytorch 9 Aug 2023

Despite the task's significance, prevailing generative models exhibit limitations in music quality, computational efficiency, and generalization.

40 stars