Search Results for author: Yusong Wu

Found 11 papers, 5 papers with code

ReaLJam: Real-Time Human-AI Music Jamming with Reinforcement Learning-Tuned Transformers

no code implementations • 28 Feb 2025 • Alexander Scarlatos, Yusong Wu, Ian Simon, Adam Roberts, Tim Cooijmans, Natasha Jaques, Cassie Tarakajian, Cheng-Zhi Anna Huang

We enable real-time interactions using the concept of anticipation, where the agent continually predicts how the performance will unfold and visually conveys its plan to the user.
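
A minimal sketch of what such anticipation-based scheduling could look like; the interfaces (model.continue_performance, ui.show_plan, performance.schedule, etc.) are hypothetical placeholders, not the ReaLJam implementation:

```python
# Sketch of anticipation-style real-time jamming (hypothetical interfaces).
# The agent keeps a plan that always extends a fixed lookahead beyond the
# current playback time, and revises it as the human's notes arrive.
import time

LOOKAHEAD_SECONDS = 2.0   # how far ahead of "now" the plan must reach
REPLAN_INTERVAL = 0.25    # how often the agent revises its plan

def jam_loop(model, performance, ui, start_time):
    plan = []  # list of (onset_time, note) the agent intends to play
    while performance.is_active():
        now = time.monotonic() - start_time
        # Drop planned notes whose onset time has already passed.
        plan = [(t, n) for (t, n) in plan if t > now]
        # Extend the plan so it always covers the lookahead horizon.
        if not plan or plan[-1][0] < now + LOOKAHEAD_SECONDS:
            context = performance.history(until=now) + plan
            plan += model.continue_performance(context, horizon=LOOKAHEAD_SECONDS)
        ui.show_plan(plan)          # visually convey the agent's intended notes
        performance.schedule(plan)  # commit notes so they can sound on time
        time.sleep(REPLAN_INTERVAL)
```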

AI Agent

The Song Describer Dataset: a Corpus of Audio Captions for Music-and-Language Evaluation

1 code implementation • 16 Nov 2023 • Ilaria Manco, Benno Weck, Seungheon Doh, Minz Won, Yixiao Zhang, Dmitry Bogdanov, Yusong Wu, Ke Chen, Philip Tovstogan, Emmanouil Benetos, Elio Quinton, György Fazekas, Juhan Nam

We introduce the Song Describer dataset (SDD), a new crowdsourced corpus of high-quality audio-caption pairs, designed for the evaluation of music-and-language models.
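
One common way such audio-caption pairs are used for evaluation is text-to-audio retrieval. The sketch below assumes hypothetical embedding matrices produced by whatever music-and-language model is under test; it is not the dataset's official evaluation protocol:

```python
# Text-to-audio retrieval on audio-caption pairs: each caption should
# retrieve its own paired clip. `audio_embs` and `text_embs` are assumed
# to be row-aligned (row i of each comes from the same pair).
import numpy as np

def recall_at_k(audio_embs, text_embs, k=10):
    """Fraction of captions whose paired clip appears in the top-k results."""
    a = audio_embs / np.linalg.norm(audio_embs, axis=1, keepdims=True)
    t = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = t @ a.T                     # (num_captions, num_clips) cosine similarities
    ranks = np.argsort(-sims, axis=1)  # best-matching clips first
    hits = [i in ranks[i, :k] for i in range(len(t))]  # ground truth is the diagonal
    return float(np.mean(hits))
```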

Music Captioning • Music Generation • +3

MusicLDM: Enhancing Novelty in Text-to-Music Generation Using Beat-Synchronous Mixup Strategies

1 code implementation • 3 Aug 2023 • Ke Chen, Yusong Wu, Haohe Liu, Marianna Nezhurina, Taylor Berg-Kirkpatrick, Shlomo Dubnov

Diffusion models have shown promising results in cross-modal generation tasks, including text-to-image and text-to-audio generation.
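
The beat-synchronous mixup named in the title can be illustrated with a rough sketch: two training clips are aligned at the beat level before a standard mixup blend. This is a hypothetical simplification based on the title alone; the paper's actual strategy may differ:

```python
# Hypothetical beat-synchronous mixup step: shift clip B so its first
# downbeat coincides with clip A's, then blend the waveforms.
import numpy as np

def beat_synchronous_mixup(audio_a, audio_b, beats_a, beats_b, sr, lam=0.5):
    # Align clip B's first downbeat (in seconds) with clip A's.
    offset = int((beats_a[0] - beats_b[0]) * sr)
    audio_b = np.roll(audio_b, offset)
    # Standard mixup blend on the beat-aligned waveforms.
    length = min(len(audio_a), len(audio_b))
    return lam * audio_a[:length] + (1.0 - lam) * audio_b[:length]
```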

Audio Generation • Beat Tracking • +3

Peking Opera Synthesis via Duration Informed Attention Network

no code implementations • 7 Aug 2020 • Yusong Wu, Shengchen Li, Chengzhu Yu, Heng Lu, Chao Weng, Liqiang Zhang, Dong Yu

In this work, we propose to deal with this issue and synthesize expressive Peking Opera singing from the music score based on the Duration Informed Attention Network (DurIAN) framework.
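
The "duration informed" part of DurIAN-style synthesis can be illustrated by a length-regulation step that expands phoneme-level encoder states to frame level using predicted durations. This is a simplified sketch, not the authors' implementation:

```python
# Each phoneme's encoder state is repeated for the number of acoustic frames
# its duration covers, so the decoder consumes a frame-aligned sequence.
import numpy as np

def length_regulate(phoneme_states, durations_in_frames):
    """Expand (num_phonemes, dim) states to (total_frames, dim) using durations."""
    frames = [np.tile(state, (int(d), 1))
              for state, d in zip(phoneme_states, durations_in_frames)]
    return np.concatenate(frames, axis=0)

# Example: 3 phonemes lasting 2, 4 and 3 frames -> 9 frame-level states.
states = np.random.randn(3, 256)
frame_states = length_regulate(states, [2, 4, 3])
assert frame_states.shape == (9, 256)
```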

Rhythm • Singing Voice Synthesis

Learning Singing From Speech

no code implementations • 20 Dec 2019 • Liqiang Zhang, Chengzhu Yu, Heng Lu, Chao Weng, Yusong Wu, Xiang Xie, Zijin Li, Dong Yu

The proposed algorithm first integrates speech and singing synthesis into a unified framework, then learns universal speaker embeddings that are shareable between the speech and singing synthesis tasks.
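
A minimal sketch of the shared-speaker-embedding idea, with hypothetical module names rather than the authors' code: a single embedding table conditions both a speech decoder and a singing decoder, so a speaker seen only in speech data can still be synthesized singing:

```python
# One speaker embedding table shared across the two synthesis branches.
import torch
import torch.nn as nn

class UnifiedSynthesizer(nn.Module):
    def __init__(self, num_speakers, spk_dim, speech_decoder, singing_decoder):
        super().__init__()
        self.speaker_embedding = nn.Embedding(num_speakers, spk_dim)  # shared
        self.speech_decoder = speech_decoder
        self.singing_decoder = singing_decoder

    def forward(self, linguistic_features, speaker_id, task):
        spk = self.speaker_embedding(speaker_id)
        if task == "speech":
            return self.speech_decoder(linguistic_features, spk)
        return self.singing_decoder(linguistic_features, spk)
```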

Speech Synthesis • Voice Conversion
