Generating Diverse and Natural 3D Human Motions From Text

Automated generation of 3D human motions from text is a challenging problem. The generated motions are expected to be sufficiently diverse to explore the text-grounded motion space and, more importantly, to accurately depict the content of the prescribed text descriptions. Here we tackle this problem with a two-stage approach: text2length sampling and text2motion generation. Text2length samples from the learned distribution of motion lengths conditioned on the input text. Our text2motion module then uses a temporal variational autoencoder to synthesize a diverse set of human motions of the sampled lengths. Instead of operating directly on pose sequences, we propose the motion snippet code as our internal motion representation, which captures local semantic motion contexts and is empirically shown to facilitate the generation of plausible motions faithful to the input text. Moreover, we construct HumanML3D, a large-scale dataset of scripted 3D human motions consisting of 14,616 motion clips and 44,970 text descriptions. Extensive empirical experiments demonstrate the effectiveness of our approach. Project webpage: https://ericguo5513.github.io/text-to-motion/.
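To make the two-stage inference flow concrete, below is a minimal PyTorch sketch. The module names (Text2Length, Text2Motion), the 512-dimensional text embedding, the 263-dimensional pose vector (the HumanML3D feature size), the 4-frame snippet length, and the per-step Gaussian prior are all illustrative assumptions; this is a sketch of the control flow described in the abstract, not the authors' released implementation.

# Illustrative sketch of the two-stage text-to-motion pipeline.
# All module names, dimensions, and the per-step Gaussian prior are
# assumptions for exposition, not the paper's released architecture.
import torch
import torch.nn as nn

class Text2Length(nn.Module):
    """Stage 1: model a categorical distribution over (binned) motion
    lengths conditioned on a text embedding, then sample from it."""
    def __init__(self, text_dim=512, num_bins=50, frames_per_bin=4):
        super().__init__()
        self.frames_per_bin = frames_per_bin
        self.head = nn.Sequential(
            nn.Linear(text_dim, 256), nn.ReLU(), nn.Linear(256, num_bins))

    def sample(self, text_emb):                  # text_emb: (B, text_dim)
        logits = self.head(text_emb)             # (B, num_bins)
        bins = torch.distributions.Categorical(logits=logits).sample()
        return (bins + 1) * self.frames_per_bin  # motion length in frames

class Text2Motion(nn.Module):
    """Stage 2: a temporal VAE decoder that unrolls for the sampled
    number of steps, emitting one motion snippet code per step and
    decoding each code into a few raw pose frames."""
    def __init__(self, text_dim=512, latent_dim=128,
                 snippet_dim=512, pose_dim=263, snippet_len=4):
        super().__init__()
        self.snippet_len = snippet_len           # frames per snippet code
        self.latent_dim = latent_dim
        self.pose_dim = pose_dim
        self.cell = nn.GRUCell(text_dim + latent_dim, snippet_dim)
        self.pose_decoder = nn.Linear(snippet_dim, snippet_len * pose_dim)

    def generate(self, text_emb, num_frames):
        batch = text_emb.size(0)
        steps = num_frames // self.snippet_len
        h = text_emb.new_zeros(batch, self.cell.hidden_size)
        frames = []
        for _ in range(steps):
            # A fresh latent per step is what produces diverse motions
            # for the same text description.
            z = torch.randn(batch, self.latent_dim, device=text_emb.device)
            h = self.cell(torch.cat([text_emb, z], dim=-1), h)  # snippet code
            poses = self.pose_decoder(h).view(batch, self.snippet_len,
                                              self.pose_dim)
            frames.append(poses)
        return torch.cat(frames, dim=1)          # (B, num_frames, pose_dim)

# Usage: sample a length from the text, then synthesize a motion of it.
text_emb = torch.randn(1, 512)                   # stand-in text encoding
length = Text2Length().sample(text_emb)          # e.g. tensor([120])
motion = Text2Motion().generate(text_emb, int(length.item()))
print(motion.shape)                              # torch.Size([1, T, 263])

At inference time, the target length is drawn first from the text-conditioned length distribution, and the temporal decoder is then unrolled for that many snippet steps; repeating the process with new latent samples yields the diverse-yet-faithful motions the abstract describes.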

Datasets


Introduced in the Paper:

HumanML3D

Used in the Paper:

KIT Motion-Language
InterHuman

Results from the Paper


Task              Dataset              Model  Metric            Value   Global Rank
Motion Synthesis  HumanML3D            T2M    FID               1.087   #21
Motion Synthesis  HumanML3D            T2M    Diversity         9.175   #19
Motion Synthesis  HumanML3D            T2M    MultiModality     2.219   #9
Motion Synthesis  HumanML3D            T2M    R-Precision Top3  0.736   #18
Motion Synthesis  InterHuman           T2M    FID               13.769  #6
Motion Synthesis  InterHuman           T2M    R-Precision Top3  0.464   #5
Motion Synthesis  InterHuman           T2M    MM Dist           5.731   #4
Motion Synthesis  InterHuman           T2M    MultiModality     1.387   #4
Motion Synthesis  KIT Motion-Language  T2M    FID               3.022   #18
Motion Synthesis  KIT Motion-Language  T2M    R-Precision Top3  0.681   #18
Motion Synthesis  KIT Motion-Language  T2M    Diversity         10.72   #17
Motion Synthesis  KIT Motion-Language  T2M    MultiModality     2.052   #8