Monocular 3D Human Pose Estimation by Predicting Depth on Joints

ICCV 2017 · Bruce Xiaohan Nie, Ping Wei, Song-Chun Zhu

This paper aims at estimating full-body 3D human poses from monocular images, where the biggest challenge is the inherent ambiguity introduced by lifting a 2D pose into 3D space. We propose a novel framework that reduces this ambiguity by predicting the depth of human joints from 2D joint locations and body-part images. Our approach is built on a two-level hierarchy of Long Short-Term Memory (LSTM) networks that can be trained end-to-end. The first level consists of two components: 1) a skeleton-LSTM, which learns depth information from global human-skeleton features; and 2) a patch-LSTM, which exploits local image evidence around joint locations. Both networks have a tree structure defined by the kinematic relations of the human skeleton, so information at individual joints is broadcast through the whole skeleton in a top-down fashion. The two networks are first pre-trained separately on different data sources and then aggregated at the second level for the final depth prediction. Empirical evaluation on the Human3.6M and HHOI datasets demonstrates the advantage of combining the global 2D skeleton with local image patches for depth prediction, as well as superior quantitative and qualitative performance relative to state-of-the-art methods.
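To make the tree-structured, top-down message passing concrete, below is a minimal sketch of how such a skeleton-conditioned LSTM could be wired up. This is not the authors' implementation: the joint topology, feature dimensions, and names (`TreeLSTMDepth`, `PARENTS`) are illustrative assumptions, and the per-joint input features stand in for either the global skeleton features or the CNN patch features described in the abstract.

```python
import torch
import torch.nn as nn

# Hypothetical kinematic tree: PARENTS[i] is the parent joint of joint i
# (the root has parent -1). This 12-joint topology is illustrative only.
PARENTS = [-1, 0, 1, 2, 1, 4, 1, 6, 0, 8, 0, 10]

class TreeLSTMDepth(nn.Module):
    """Sketch of a tree-structured LSTM: each joint's cell receives its
    parent's hidden state, so information flows top-down from the root
    through the whole skeleton, and a linear head predicts one depth
    value per joint (standing in for either the skeleton-LSTM or the
    patch-LSTM of the first level)."""

    def __init__(self, in_dim: int, hidden: int = 128):
        super().__init__()
        self.cell = nn.LSTMCell(in_dim, hidden)
        self.depth_head = nn.Linear(hidden, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_joints, in_dim), joints ordered so that
        # every parent precedes its children.
        h, c, depths = {}, {}, []
        for i in range(feats.shape[1]):
            p = PARENTS[i]
            state = None if p < 0 else (h[p], c[p])  # root starts from zeros
            h[i], c[i] = self.cell(feats[:, i], state)
            depths.append(self.depth_head(h[i]))
        return torch.cat(depths, dim=1)  # (batch, num_joints) depth estimates

# Toy usage: 2D joint coordinates as skeleton features for 12 joints.
model = TreeLSTMDepth(in_dim=2)
joints_2d = torch.randn(4, 12, 2)  # batch of 4 poses
print(model(joints_2d).shape)      # torch.Size([4, 12])
```

In the paper's design, the second level aggregates the outputs of the two first-level networks; in this sketch, both would be instances of this module whose per-joint predictions are fused by a further learned combination.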


Datasets

Human3.6M · HHOI

Results from the Paper


Ranked #113 on 3D Human Pose Estimation on Human3.6M (PA-MPJPE metric)


Results from Other Papers


Task                      Dataset    Model  Metric    Value  Rank
3D Human Pose Estimation  Human3.6M  Nie    PA-MPJPE  79.5   #113
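For reference, PA-MPJPE (Procrustes-aligned mean per-joint position error) aligns the predicted pose to the ground truth with the optimal similarity transform (rotation, translation, scale) before averaging the per-joint errors. Below is a minimal NumPy sketch of the standard computation; the function name `pa_mpjpe` is ours, not from the paper.

```python
import numpy as np

def pa_mpjpe(pred: np.ndarray, gt: np.ndarray) -> float:
    """PA-MPJPE: mean per-joint error (e.g., in mm) after aligning the
    predicted pose to the ground truth with the optimal similarity
    transform (Procrustes analysis). pred, gt: (J, 3) joint arrays."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    P, G = pred - mu_p, gt - mu_g                 # centre both poses
    U, S, Vt = np.linalg.svd(P.T @ G)             # cross-covariance SVD
    R = Vt.T @ U.T                                # optimal rotation
    if np.linalg.det(R) < 0:                      # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (P ** 2).sum()              # optimal isotropic scale
    aligned = scale * P @ R.T + mu_g              # apply the transform
    return float(np.linalg.norm(aligned - gt, axis=1).mean())
```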

Methods


No methods listed for this paper.