Search Results for author: Haoxuan Liu

Found 2 papers, 0 papers with code

MetaBGM: Dynamic Soundtrack Transformation For Continuous Multi-Scene Experiences With Ambient Awareness And Personalization

no code implementations · 5 Sep 2024 · Haoxuan Liu, ZiHao Wang, HaoRong Hong, Youwei Feng, Jiaxin Yu, Han Diao, Yunfei Xu, Kejun Zhang

This paper introduces MetaBGM, a groundbreaking framework for generating background music that adapts to dynamic scenes and real-time user interactions.

Audio Generation

MuDiT & MuSiT: Alignment with Colloquial Expression in Description-to-Song Generation

no code implementations · 3 Jul 2024 · ZiHao Wang, Haoxuan Liu, Jiaxing Yu, Tao Zhang, Yan Liu, Kejun Zhang

This task aims to bridge the gap between colloquial language understanding and auditory expression within an AI model, with the ultimate goal of creating songs that accurately satisfy human auditory expectations and structurally align with musical norms.

Descriptive Rhythm
