Text-to-Music Generation
9 papers with code • 2 benchmarks • 2 datasets
Most implemented papers
MusicLM: Generating Music From Text
We introduce MusicLM, a model generating high-fidelity music from text descriptions such as "a calming violin melody backed by a distorted guitar riff".
Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task
Benefiting from large-scale datasets and pre-trained models, the field of generative models has recently gained significant momentum.
Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion
Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another "language" of communication -- music.
MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies
Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation.
AudioLDM 2: Learning Holistic Audio Generation with Self-supervised Pretraining
Any audio can be translated into a language-of-audio (LOA) representation based on AudioMAE, a self-supervised pre-trained representation learning model.
Music Understanding LLaMA: Advancing Text-to-Music Generation with Question Answering and Captioning
We present a methodology for generating question-answer pairs from existing audio captioning datasets, and introduce the MusicQA Dataset, designed for answering open-ended music-related questions.
Investigating Personalization Methods in Text to Music Generation
In this work, we investigate the personalization of text-to-music diffusion models in a few-shot setting.
Mustango: Toward Controllable Text-to-Music Generation
With recent advancements in text-to-audio and text-to-music based on latent diffusion models, the quality of generated content has been reaching new heights.
The Song Describer Dataset: a Corpus of Audio Captions for Music-and-Language Evaluation
We introduce the Song Describer dataset (SDD), a new crowdsourced corpus of high-quality audio-caption pairs, designed for the evaluation of music-and-language models.