Self-Supervised 3D Human Pose Estimation with Multiple-View Geometry

17 Aug 2021  ·  Arij Bouazizi, Julian Wiederer, Ulrich Kressel, Vasileios Belagiannis ·

We present a self-supervised learning algorithm for 3D human pose estimation of a single person, based on a multiple-view camera system and 2D body pose estimates for each view. To train our model, represented by a deep neural network, we propose a learning algorithm with four loss functions that requires no 2D or 3D body pose ground-truth. The proposed loss functions use multiple-view geometry to reconstruct 3D body pose estimates and impose body pose constraints across the camera views. Our approach utilizes all available camera views during training, while inference is single-view. In our evaluations, we show promising performance on the Human3.6M and HumanEva benchmarks, and we also present a generalization study on the MPI-INF-3DHP dataset, as well as several ablation results. Overall, we outperform all self-supervised learning methods and reach results comparable to supervised and weakly-supervised learning approaches. Our code and models are publicly available.
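The multiple-view reconstruction underlying such training can be illustrated with standard linear triangulation (DLT): given 2D joint estimates from two calibrated views, a 3D joint position is recovered as the least-squares solution of the projection equations. This is a generic sketch of the geometric building block, not the paper's exact pipeline; the function name and two-view setup are illustrative assumptions.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 : (3, 4) camera projection matrices.
    x1, x2 : 2D image coordinates of the same joint in each view.
    Returns the 3D point in world coordinates.
    """
    # Each view contributes two linear equations in the homogeneous point X,
    # derived from x ~ P X (cross-product form of the projection constraint).
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

Applied per joint across the available views, such reconstructions can serve as self-supervised 3D targets, with additional losses enforcing that the lifted poses stay consistent across cameras.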


Datasets


Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Weakly-supervised 3D Human Pose Estimation | Human3.6M | 2D-3D Lifting self-supervised | Average MPJPE (mm) | 62.0 | # 12
Weakly-supervised 3D Human Pose Estimation | Human3.6M | 2D-3D Lifting self-supervised | Number of Views | 1 | # 1
Weakly-supervised 3D Human Pose Estimation | Human3.6M | 2D-3D Lifting self-supervised | Number of Frames Per View | 1 | # 1
Weakly-supervised 3D Human Pose Estimation | Human3.6M | 2D-3D Lifting self-supervised | 3D Annotations | No | # 1
3D Human Pose Estimation | Human3.6M | 2D-3D Lifting self-supervised | Average MPJPE (mm) | 62.0 | # 256
3D Human Pose Estimation | Human3.6M | 2D-3D Lifting self-supervised | Using 2D ground-truth joints | No | # 2
3D Human Pose Estimation | Human3.6M | 2D-3D Lifting self-supervised | Multi-View or Monocular | Multi-View | # 1

Methods


No methods listed for this paper.