DeepSSM: Deep State-Space Model for 3D Human Motion Prediction

25 May 2020 · Xiaoli Liu, Jianqin Yin, Huaping Liu, Jun Liu

Predicting future human motion plays a significant role in human-machine interaction across various real-life applications. A unified formulation and multi-order modeling are two critical perspectives for analyzing and representing human motion. In contrast to prior works, we improve the multi-order modeling ability of human motion systems for more accurate predictions by building a deep state-space model (DeepSSM), which combines the advantages of state-space theory and deep networks. Specifically, we formulate the human motion system as the state-space model of a dynamic system and model it with state-space theory, offering a unified formulation for diverse human motion systems. Moreover, a novel deep network is designed to parameterize this system, jointly modeling the state-state transition and state-observation transition processes. In this way, the state of the system is updated with multi-order information from a time-varying human motion sequence, and multiple future poses are recursively predicted via the state-observation transition. To further improve the modeling ability of the system, a novel loss, WT-MPJPE (Weighted Temporal Mean Per Joint Position Error), is introduced to optimize the model. This loss encourages more accurate predictions by assigning larger weights to the early time steps. Experiments on two benchmark datasets (i.e., Human3.6M and 3DPW) confirm that our method achieves state-of-the-art performance, improving accuracy by at least 2.2mm per joint. The code will be available at https://github.com/lily2lab/DeepSSM.git.
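The abstract describes two concrete mechanisms: a recursive prediction loop driven by state-state and state-observation transitions, and a temporally weighted joint-position loss. The sketch below illustrates both in PyTorch under stated assumptions: the exact weighting scheme of WT-MPJPE, the `decay` factor, and the function names `wt_mpjpe`, `rollout`, `f_trans`, and `g_obs` are hypothetical stand-ins, not the paper's actual implementation.

```python
import torch


def wt_mpjpe(pred, target, decay=0.9):
    """Weighted Temporal MPJPE (sketch).

    pred, target: (batch, T, J, 3) predicted / ground-truth 3D joint positions.
    decay: hypothetical decay factor; the paper's exact weighting may differ.
    Earlier time steps receive larger weights, as the abstract describes.
    """
    T = pred.shape[1]
    # Per-frame joint position error, averaged over joints: (batch, T).
    per_frame = torch.norm(pred - target, dim=-1).mean(dim=-1)
    # Monotonically decreasing weights over time, normalized to sum to 1.
    weights = decay ** torch.arange(T, dtype=pred.dtype, device=pred.device)
    weights = weights / weights.sum()
    return (per_frame * weights).sum(dim=-1).mean()


def rollout(state, f_trans, g_obs, horizon):
    """Recursive prediction via the state-space view (sketch).

    f_trans: state-state transition module; g_obs: state-observation module
    emitting a 3D pose. Both are assumed neural networks, as in the abstract.
    """
    poses = []
    for _ in range(horizon):
        state = f_trans(state)         # update the latent state
        poses.append(g_obs(state))     # emit the next future pose
    return torch.stack(poses, dim=1)   # (batch, horizon, J, 3)
```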
