PoseRN: A 2D pose refinement network for bias-free multi-view 3D human pose estimation

7 Jul 2021  ·  Akihiko Sayo, Diego Thomas, Hiroshi Kawasaki, Yuta Nakashima, Katsushi Ikeuchi ·

We propose a new 2D pose refinement network that learns to predict the human bias in estimated 2D poses. Such biases arise from the difference between 2D joint locations annotated by humans, based on annotators' perception, and those defined by motion capture (MoCap) systems. These biases are baked into publicly available 2D pose datasets and cannot be removed by existing error-reduction approaches. Our proposed pose refinement network efficiently removes the human bias from estimated 2D poses and thereby achieves highly accurate multi-view 3D human pose estimation.
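The core idea of the abstract can be sketched as a network that predicts the per-joint annotation bias and subtracts it from the estimated 2D pose. The snippet below is a minimal illustration of that refinement step, assuming a 17-joint 2D pose; the tiny MLP, its layer sizes, and the random weights are illustrative placeholders, not the paper's actual architecture or trained parameters.

```python
import numpy as np

J = 17  # number of 2D joints (Human3.6M convention); illustrative assumption

rng = np.random.default_rng(0)
# Placeholder weights for a tiny bias-prediction MLP (untrained, for shape demo only)
W1 = rng.standard_normal((2 * J, 64)) * 0.01
b1 = np.zeros(64)
W2 = rng.standard_normal((64, 2 * J)) * 0.01
b2 = np.zeros(2 * J)

def predict_bias(pose_2d):
    """Hypothetical MLP mapping a flattened (J, 2) pose to a per-joint bias."""
    x = pose_2d.reshape(-1)
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return (h @ W2 + b2).reshape(J, 2)

def refine_pose(pose_2d):
    """Refined pose = estimated 2D pose minus the predicted human-annotation bias."""
    return pose_2d - predict_bias(pose_2d)

estimated = rng.standard_normal((J, 2))  # stand-in for a 2D pose estimator's output
refined = refine_pose(estimated)
print(refined.shape)  # (17, 2)
```

In the multi-view setting described by the paper, each camera view's refined 2D pose would then feed into triangulation for 3D pose estimation; that downstream step is not shown here.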

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| 3D Human Pose Estimation | Human3.6M | PoseRN | Average MPJPE (mm) | 38.4 | #66 |
| | | | Using 2D ground-truth joints | No | #2 |
| | | | Multi-View or Monocular | Multi-View | #1 |
