Single-Shot Motion Completion with Transformer

1 Mar 2021  ·  Yinglin Duan, Tianyang Shi, Zhengxia Zou, Yenan Lin, Zhehui Qian, Bohan Zhang, Yi Yuan ·

Motion completion is a challenging, long-studied problem of great significance in film and game applications. For the different motion completion scenarios (in-betweening, in-filling, and blending), most previous methods handle each case with a bespoke design. In this work, we propose a simple but effective method that solves multiple motion completion problems under a unified framework and achieves new state-of-the-art accuracy under multiple evaluation settings. Inspired by the recent success of attention-based models, we cast completion as a sequence-to-sequence prediction problem. Our method consists of two modules: a standard transformer encoder with self-attention that learns long-range dependencies of input motions, and a trainable mixture embedding module that models temporal information and discriminates key-frames. Our method runs in a non-autoregressive manner, predicting multiple missing frames within a single forward pass in real time. We finally show the effectiveness of our method in music-dance applications.
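The pipeline sketched in the abstract (mask the missing frames, add a positional plus key-frame-indicator "mixture" embedding, then let a self-attention encoder predict every frame in one pass) can be illustrated with a toy NumPy example. This is a hypothetical sketch, not the authors' implementation: the layer sizes, the single attention layer, and the additive key-frame embedding are assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention over the whole sequence:
    # every frame attends to every other frame, so the missing
    # frames can draw on long-range context from the key-frames.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
T, D = 8, 16                       # 8 frames, 16-dim pose features (toy sizes)
pose = rng.normal(size=(T, D))     # toy motion sequence
keyframe = np.zeros(T, dtype=bool)
keyframe[[0, T - 1]] = True        # in-betweening: only endpoints are known

# Hypothetical "mixture" embedding: a learned positional table plus a
# learned indicator embedding that discriminates key-frames from
# missing frames (both randomly initialized here).
pos_emb = rng.normal(size=(T, D))
kf_emb = rng.normal(size=(2, D))   # row 0: missing frame, row 1: key-frame

X = np.where(keyframe[:, None], pose, 0.0)   # zero out the unknown frames
X = X + pos_emb + kf_emb[keyframe.astype(int)]

# One encoder layer; all missing frames are predicted together in a
# single (non-autoregressive) forward pass, not one frame at a time.
Wq, Wk, Wv = (rng.normal(size=(D, D)) * 0.1 for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # one prediction per frame: (8, 16)
```

A real model would stack several such layers, add feed-forward sublayers and residual connections, and train the embeddings end to end; the point here is only that a single forward pass yields predictions for every frame at once.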



Results from the Paper

Task: Motion Synthesis · Dataset: LaFAN1 · Model: SSMCT

Metric     Value    Global Rank
L2Q@5      0.14     # 2
L2Q@15     0.36     # 2
L2Q@30     0.61     # 2
L2P@5      0.22     # 2
L2P@15     0.56     # 2
L2P@30     1.1      # 2
NPSS@5     0.0016   # 2
NPSS@15    0.0234   # 2
NPSS@30    0.1222   # 2

