Unsupervised Geometry-Aware Representation for 3D Human Pose Estimation

ECCV 2018 · Helge Rhodin, Mathieu Salzmann, Pascal Fua

Modern 3D human pose estimation techniques rely on deep networks, which require large amounts of training data. While weakly-supervised methods require less supervision by utilizing 2D poses or multi-view imagery without annotations, they still need a sufficiently large set of samples with 3D annotations for learning to succeed. In this paper, we propose to overcome this problem by learning a geometry-aware body representation from multi-view images without annotations. To this end, we use an encoder-decoder that predicts an image from one viewpoint given an image from another viewpoint. Because this representation encodes 3D geometry, using it in a semi-supervised setting makes it easier to learn a mapping from it to 3D human pose. As evidenced by our experiments, our approach significantly outperforms fully-supervised methods given the same amount of labeled data, and outperforms other semi-supervised methods while using as little as 1% of the labeled data.
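The abstract describes an encoder-decoder trained without annotations to re-render a subject from a second camera viewpoint, so that the latent code is forced to capture 3D geometry. The following is a minimal sketch of that idea, not the authors' code: it assumes a PyTorch setup, treats the latent as a set of 3D points that are rotated by the relative camera rotation before decoding, and uses illustrative module names, image sizes, and layer widths.

```python
# Minimal sketch (not the authors' implementation): an encoder-decoder that
# predicts the image seen from camera 2 given the image from camera 1 by
# rotating a geometry-aware latent code. All sizes and names are illustrative.
import torch
import torch.nn as nn

class GeometryAwareAE(nn.Module):
    def __init__(self, num_points=200):
        super().__init__()
        self.num_points = num_points
        # Encoder: 64x64 image -> latent interpreted as 3 x num_points 3D points.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 3 * num_points),
        )
        # Decoder: rotated latent -> reconstructed 64x64 image in the target view.
        self.decoder = nn.Sequential(
            nn.Linear(3 * num_points, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, img_view1, R1, R2):
        # Encode view 1 into a 3D latent code of shape (B, 3, num_points).
        z = self.encoder(img_view1).view(-1, 3, self.num_points)
        # Rotate the latent from camera-1 to camera-2 coordinates:
        # apply R_{2<-1} = R2 @ R1^T to every latent 3D point.
        z_rot = torch.bmm(R2 @ R1.transpose(1, 2), z)
        # Decode the rotated code into the predicted view-2 image.
        return self.decoder(z_rot.flatten(1))

# Unsupervised training step: reconstruct view 2 from view 1 (photometric L2 loss).
model = GeometryAwareAE()
img1, img2 = torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64)
R1, R2 = torch.eye(3).repeat(4, 1, 1), torch.eye(3).repeat(4, 1, 1)
loss = nn.functional.mse_loss(model(img1, R1, R2), img2)
loss.backward()
```

In the semi-supervised stage, the pretrained encoder would be kept and a small regressor trained on top of its latent code using the few available 3D-annotated samples; the details of that regressor are not specified in the abstract.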


Datasets

Human3.6M

Results

Task: Weakly-supervised 3D Human Pose Estimation
Dataset: Human3.6M
Model: Rhodin et al.
Average MPJPE (mm): 131.7 (Global Rank #29)
3D Annotations: S1 (Global Rank #1)

Methods


No methods listed for this paper.