Text-to-Music Generation

9 papers with code • 2 benchmarks • 2 datasets

Text-to-music generation is the task of synthesizing music audio from natural-language descriptions, such as "a calming violin melody backed by a distorted guitar riff".



Most implemented papers

MusicLM: Generating Music From Text

facebookresearch/audiocraft 26 Jan 2023

We introduce MusicLM, a model generating high-fidelity music from text descriptions such as "a calming violin melody backed by a distorted guitar riff".

Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task

sander-wood/text-to-music 21 Nov 2022

Benefiting from large-scale datasets and pre-trained models, the field of generative models has recently gained significant momentum.

Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion

archinetai/audio-diffusion-pytorch 27 Jan 2023

Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another "language" of communication -- music.

MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies

retrocirce/musicldm 3 Aug 2023

Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation.

AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining

haoheliu/AudioLDM2 10 Aug 2023

Any audio can be translated into a "language of audio" (LOA) representation based on AudioMAE, a self-supervised pre-trained representation learning model.

Music Understanding LLaMA: Advancing Text-to-Music Generation with Question Answering and Captioning

crypto-code/mu-llama 22 Aug 2023

We present a methodology for generating question-answer pairs from existing audio captioning datasets and introduce the MusicQA Dataset, designed for answering open-ended music-related questions.
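The caption-to-QA construction described above can be sketched roughly as follows. This is a hypothetical illustration, not MU-LLaMA's actual pipeline; the question templates and function name are assumptions.

```python
# Hypothetical sketch: deriving open-ended question-answer pairs from an
# existing audio caption, in the spirit of the MusicQA construction.
# Templates below are illustrative, not the paper's actual prompts.

def caption_to_qa_pairs(caption: str) -> list[tuple[str, str]]:
    """Turn one music caption into simple (question, answer) pairs."""
    question_templates = [
        "Describe the music in this audio clip.",
        "What can you hear in this recording?",
    ]
    # Each template becomes a question whose answer is the caption itself.
    return [(q, caption) for q in question_templates]

pairs = caption_to_qa_pairs(
    "A calming violin melody backed by a distorted guitar riff."
)
```

In the paper, a large language model is used to produce more varied question-answer pairs than fixed templates would allow.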

Investigating Personalization Methods in Text to Music Generation

zelaki/DreamSound 20 Sep 2023

In this work, we investigate the personalization of text-to-music diffusion models in a few-shot setting.

Mustango: Toward Controllable Text-to-Music Generation

amaai-lab/mustango 14 Nov 2023

With recent advancements in text-to-audio and text-to-music based on latent diffusion models, the quality of generated content has been reaching new heights.

The Song Describer Dataset: a Corpus of Audio Captions for Music-and-Language Evaluation

mulab-mir/song-describer-dataset 16 Nov 2023

We introduce the Song Describer dataset (SDD), a new crowdsourced corpus of high-quality audio-caption pairs, designed for the evaluation of music-and-language models.
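Corpora like SDD are typically distributed as tables of audio-caption pairs; a minimal loading sketch is shown below. The column names (`track_id`, `caption`) are assumptions for illustration, not the dataset's actual schema.

```python
# Hypothetical sketch: loading an audio-caption corpus like SDD for
# music-and-language evaluation. Field names are assumed, not SDD's
# actual schema; the sample rows are invented for illustration.
import csv
import io

sample = '''track_id,caption
001,"A calming violin melody backed by a distorted guitar riff."
002,"Upbeat electronic track with a driving bassline."
'''

rows = list(csv.DictReader(io.StringIO(sample)))
# Collect the caption text for each track for downstream evaluation,
# e.g. scoring a text-to-music model's outputs against human captions.
captions = {row["track_id"]: row["caption"] for row in rows}
```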