SPGNet: Spatial Projection Guided 3D Human Pose Estimation in Low Dimensional Space

4 Jun 2022  ·  Zihan Wang, Ruimin Chen, Mengxuan Liu, Guanfang Dong, Anup Basu ·

We propose SPGNet, a method for 3D human pose estimation that incorporates multi-dimensional re-projection into supervised learning. In this method, a 2D-to-3D lifting network predicts the global position and joint coordinates of the 3D human pose. We then re-project the estimated 3D pose back to 2D keypoints, applying spatial adjustments. The loss functions compare the estimated 3D pose with the 3D ground truth, and the re-projected 2D pose with the input 2D pose. In addition, we propose a kinematic constraint that restricts the predicted skeleton to constant human bone lengths. Based on estimation results on the Human3.6M dataset, our approach outperforms many state-of-the-art methods both qualitatively and quantitatively.
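The combined objective described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the simple pinhole projection, the toy skeleton in `BONES`, the reference bone lengths, and the loss weights `w2d`/`wbone` are all assumptions made for the example.

```python
import numpy as np

# Toy skeleton: parent-child joint index pairs (illustrative, not Human3.6M's).
BONES = [(0, 1), (1, 2), (2, 3)]

def project_to_2d(pose_3d, focal=1.0):
    """Re-project 3D joints (N, 3) to 2D (N, 2) with a simple pinhole model."""
    z = pose_3d[:, 2:3]
    return focal * pose_3d[:, :2] / z

def bone_lengths(pose_3d):
    """Length of each bone defined by BONES."""
    return np.array([np.linalg.norm(pose_3d[i] - pose_3d[j]) for i, j in BONES])

def spg_loss(pred_3d, gt_3d, input_2d, ref_lengths, w2d=1.0, wbone=0.1):
    # 3D supervision: mean per-joint position error against ground truth.
    loss_3d = np.mean(np.linalg.norm(pred_3d - gt_3d, axis=1))
    # Re-projection consistency: projected 2D joints vs. input 2D keypoints.
    loss_2d = np.mean(np.linalg.norm(project_to_2d(pred_3d) - input_2d, axis=1))
    # Kinematic constraint: bone lengths should stay near reference lengths.
    loss_bone = np.mean(np.abs(bone_lengths(pred_3d) - ref_lengths))
    return loss_3d + w2d * loss_2d + wbone * loss_bone
```

A perfect prediction (matching the ground truth, its projection, and the reference bone lengths) drives all three terms to zero; any deviation in position, projection, or bone length increases the loss.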


Datasets

Human3.6M
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| 3D Human Pose Estimation | Human3.6M | SPGNet (GT) | Average MPJPE (mm) | 33.4 | #46 |
| 3D Human Pose Estimation | Human3.6M | SPGNet (GT) | Using 2D ground-truth joints | Yes | #2 |
| 3D Human Pose Estimation | Human3.6M | SPGNet (GT) | Multi-View or Monocular | Monocular | #1 |
| 3D Human Pose Estimation | Human3.6M | SPGNet | Average MPJPE (mm) | 45.3 | #112 |

Methods


No methods listed for this paper.