LCR-Net: Localization-Classification-Regression for Human Pose

We propose an end-to-end architecture for joint 2D and 3D human pose estimation in natural images. Key to our approach is the generation and scoring of a number of pose proposals per image, which allows us to predict the 2D and 3D poses of multiple people simultaneously. Hence, our approach does not require an approximate localization of the humans for initialization. Our architecture, named LCR-Net, contains three main components: 1) a pose proposal generator that suggests potential poses at different locations in the image; 2) a classifier that scores the different pose proposals; and 3) a regressor that refines pose proposals in both 2D and 3D. All three stages share the convolutional feature layers and are trained jointly. The final pose estimation is obtained by integrating over neighboring pose hypotheses, which is shown to improve over a standard non-maximum suppression algorithm. Our approach significantly outperforms the state of the art in 3D pose estimation on Human3.6M, a controlled environment. Moreover, it shows promising results on real images for both the single- and multi-person subsets of the MPII 2D pose benchmark.
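The final step replaces non-maximum suppression with an integration over neighboring pose hypotheses. A minimal sketch of that idea, assuming a simple score-weighted average of proposals whose joints lie close together (the function name, distance threshold, and grouping rule are illustrative, not the paper's exact procedure):

```python
import numpy as np

def aggregate_pose_proposals(poses, scores, dist_thresh=20.0):
    """Fuse neighboring pose proposals by score-weighted averaging.

    poses:  (N, J, 2) array of 2D joint coordinates for N proposals.
    scores: (N,) classification scores for the proposals.
    Returns a list of (fused_pose, score) pairs, one per detected person.
    """
    order = np.argsort(scores)[::-1]          # visit proposals by descending score
    used = np.zeros(len(poses), dtype=bool)
    results = []
    for i in order:
        if used[i]:
            continue
        # Neighbors: proposals whose mean per-joint distance to proposal i is small.
        d = np.linalg.norm(poses - poses[i], axis=-1).mean(axis=-1)
        nbr = (d < dist_thresh) & ~used
        # Integrate (weighted average) instead of keeping only the top proposal.
        w = scores[nbr] / scores[nbr].sum()
        fused = (w[:, None, None] * poses[nbr]).sum(axis=0)
        results.append((fused, scores[nbr].max()))
        used |= nbr
    return results
```

Unlike hard NMS, which discards all but the highest-scoring proposal in each neighborhood, this averaging uses every nearby hypothesis, which tends to smooth out localization noise in the individual proposals.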


Results from the Paper

| Task | Dataset | Model | Metric | Value | Rank |
|------|---------|-------|--------|-------|------|
| 3D Human Pose Estimation | Human3.6M | LCR-Net | Average MPJPE (mm) | 87.7 | #249 |
| 3D Human Pose Estimation | Human3.6M | LCR-Net | PA-MPJPE (mm) | 71.6 | #86 |
| 3D Multi-Person Pose Estimation (root-relative) | MuPoTS-3D | LCR-Net | MPJPE (mm) | 146 | #4 |
