Action-conditioned On-demand Motion Generation

17 Jul 2022 · Qiujing Lu, Yipeng Zhang, Mingjian Lu, Vwani Roychowdhury

We propose a novel framework, On-Demand MOtion Generation (ODMO), for generating realistic and diverse long-term 3D human motion sequences conditioned only on action types, with an additional capability of customization. ODMO shows improvements over SOTA approaches on all traditional motion evaluation metrics when evaluated on three public datasets (HumanAct12, UESTC, and MoCap). Furthermore, we provide both qualitative evaluations and quantitative metrics demonstrating several first-known customization capabilities afforded by our framework, including mode discovery, interpolation, and trajectory customization. These capabilities significantly widen the spectrum of potential applications of such motion generation models. The novel on-demand generative capabilities are enabled by innovations in both the encoder and decoder architectures: (i) Encoder: utilizing contrastive learning in a low-dimensional latent space to create a hierarchical embedding of motion sequences, where the codes of different action types not only form separate groups but, within an action type, codes of similar inherent patterns (motion styles) cluster together, making them readily discoverable; (ii) Decoder: using a hierarchical decoding strategy where the motion trajectory is reconstructed first and then used to reconstruct the whole motion sequence. Such an architecture enables effective trajectory control. Our code is released on GitHub: https://github.com/roychowdhuryresearch/ODMO
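To make the hierarchical decoding idea concrete, below is a minimal PyTorch sketch of the two-stage scheme described in the abstract: a latent code and an action label are first decoded into a root trajectory, which then conditions the reconstruction of the full pose sequence. This is an illustrative assumption of how such a decoder could be wired, not the released ODMO implementation; all module names (TrajectoryDecoder, MotionDecoder), the GRU-based design, and the dimensions are hypothetical.

import torch
import torch.nn as nn


class TrajectoryDecoder(nn.Module):
    """Decodes a latent code + one-hot action label into a per-frame root trajectory."""
    def __init__(self, latent_dim=64, num_actions=12, hidden_dim=128, traj_dim=3):
        super().__init__()
        self.gru = nn.GRU(latent_dim + num_actions, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, traj_dim)  # per-frame root position (x, y, z)

    def forward(self, z, action_onehot, num_frames):
        # Repeat the conditioning vector for every frame, then unroll a GRU over time.
        cond = torch.cat([z, action_onehot], dim=-1)            # (B, latent + actions)
        cond_seq = cond.unsqueeze(1).repeat(1, num_frames, 1)   # (B, T, latent + actions)
        h, _ = self.gru(cond_seq)
        return self.out(h)                                      # (B, T, 3)


class MotionDecoder(nn.Module):
    """Reconstructs the full pose sequence, conditioned on the predicted trajectory."""
    def __init__(self, latent_dim=64, num_actions=12, traj_dim=3,
                 hidden_dim=256, pose_dim=72):
        super().__init__()
        self.gru = nn.GRU(latent_dim + num_actions + traj_dim, hidden_dim,
                          batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)  # per-frame joint parameters

    def forward(self, z, action_onehot, trajectory):
        num_frames = trajectory.shape[1]
        cond = torch.cat([z, action_onehot], dim=-1)
        cond_seq = cond.unsqueeze(1).repeat(1, num_frames, 1)
        h, _ = self.gru(torch.cat([cond_seq, trajectory], dim=-1))
        return self.out(h)                                      # (B, T, pose_dim)


if __name__ == "__main__":
    B, T = 4, 60
    z = torch.randn(B, 64)                                 # latent motion code
    action = torch.eye(12)[torch.randint(0, 12, (B,))]     # one-hot action labels
    traj_dec, motion_dec = TrajectoryDecoder(), MotionDecoder()
    trajectory = traj_dec(z, action, T)                    # stage 1: root trajectory
    motion = motion_dec(z, action, trajectory)             # stage 2: full motion sequence
    print(trajectory.shape, motion.shape)

Because the trajectory is an explicit intermediate output, customizing it (e.g., editing or replacing the predicted root path before the second stage) gives the kind of trajectory control the abstract refers to.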

Benchmarks (Task: Human action generation)

Dataset        Model   Metric          Value   Global Rank
CMU MoCap      ODMO    Accuracy        93.51   #1
CMU MoCap      ODMO    FID             34      #1
CMU MoCap      ODMO    Diversity       6.56    #1
CMU MoCap      ODMO    Multimodality   2.49    #1
HumanAct12     ODMO    Accuracy        97.81   #1
HumanAct12     ODMO    FID             0.12    #1
HumanAct12     ODMO    Diversity       0.705   #1
HumanAct12     ODMO    Multimodality   2.57    #1
UESTC RGB-D    ODMO    Accuracy        93.67   #1
UESTC RGB-D    ODMO    FID             0.15    #1
UESTC RGB-D    ODMO    FID (test)      0.17    #1
UESTC RGB-D    ODMO    Diversity       7.11    #1
