Music2Dance: DanceNet for Music-driven Dance Generation

2 Feb 2020  ·  Wenlin Zhuang, Congyi Wang, Siyu Xia, Jinxiang Chai, Yangang Wang

Synthesizing human motion from music, i.e., music to dance, is appealing and has attracted much research interest in recent years. The task is challenging not only because dance requires realistic and complex human motions, but, more importantly, because the synthesized motions must be consistent with the style, rhythm, and melody of the music. In this paper, we propose a novel autoregressive generative model, DanceNet, that takes the style, rhythm, and melody of music as control signals to generate 3D dance motions with high realism and diversity. To boost the performance of the proposed model, we capture several synchronized music-dance pairs performed by professional dancers and build a high-quality music-dance pair dataset. Experiments demonstrate that the proposed method achieves state-of-the-art results.
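The authors have not released code, but the abstract's core idea, an autoregressive generator that conditions each predicted pose on per-frame music features, can be sketched as below. This is a minimal PyTorch illustration under assumed names and dimensions (a GRU decoder, a 72-D pose vector, generic music features standing in for the style/rhythm/melody signals), not the authors' DanceNet architecture.

```python
# Hypothetical sketch of music-conditioned autoregressive motion generation.
# All names and dimensions are illustrative assumptions, not DanceNet itself.
import torch
import torch.nn as nn

class MusicConditionedDecoder(nn.Module):
    def __init__(self, pose_dim=72, music_dim=54, hidden_dim=512):
        super().__init__()
        # Fuse the previous pose with the current frame's music features.
        self.rnn = nn.GRU(pose_dim + music_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, pose_dim)

    def forward(self, poses, music):
        # poses: (B, T, pose_dim) ground-truth poses (teacher forcing)
        # music: (B, T, music_dim) per-frame music features
        x = torch.cat([poses, music], dim=-1)
        h, _ = self.rnn(x)
        return self.head(h)  # predicted next-frame poses

    @torch.no_grad()
    def generate(self, seed_pose, music):
        # seed_pose: (B, pose_dim); music: (B, T, music_dim)
        pose, hidden, out = seed_pose, None, []
        for t in range(music.shape[1]):
            x = torch.cat([pose, music[:, t]], dim=-1).unsqueeze(1)
            h, hidden = self.rnn(x, hidden)
            pose = self.head(h[:, 0])  # feed the prediction back in
            out.append(pose)
        return torch.stack(out, dim=1)  # (B, T, pose_dim)
```

At inference time the model is driven purely by the music track: each generated pose is fed back as the next input, so the music features act as the control signal the abstract describes.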

No code implementations yet.

Datasets

AIST++
Results from the Paper


Task              Dataset   Model      Metric                 Value    Global Rank
Motion Synthesis  AIST++    DanceNet   Beat alignment score   0.143    #8
Motion Synthesis  AIST++    DanceNet   FID                    69.13    #5
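The beat alignment score reported above is, on AIST++, conventionally computed as the mean exponential of the distance from each kinematic beat (a local minimum of motion velocity) to the nearest music beat. A rough numpy sketch of that convention follows; the sigma value and the beat lists are illustrative assumptions, not the benchmark's exact implementation.

```python
# Hedged sketch of a beat alignment score in the AIST++ style: mean
# exponential distance from each kinematic beat to the nearest music beat.
# sigma and the beat-extraction step are assumptions here.
import numpy as np

def beat_alignment(kinematic_beats, music_beats, sigma=0.1):
    # kinematic_beats, music_beats: 1-D arrays of beat times in seconds
    kinematic_beats = np.asarray(kinematic_beats, dtype=float)
    music_beats = np.asarray(music_beats, dtype=float)
    if len(kinematic_beats) == 0 or len(music_beats) == 0:
        return 0.0
    # Distance from each kinematic beat to its nearest music beat.
    dists = np.min(np.abs(kinematic_beats[:, None] - music_beats[None, :]), axis=1)
    return float(np.mean(np.exp(-dists**2 / (2 * sigma**2))))

# Example: perfectly aligned beats score 1.0.
print(beat_alignment([0.5, 1.0, 1.5], [0.5, 1.0, 1.5, 2.0]))  # -> 1.0
```

Higher is better: a score near 1.0 means the dance's motion beats land on the music's beats, while the FID row measures how close the distribution of generated motions is to real dances (lower is better).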

Methods


No methods listed for this paper.