DenseRaC: Joint 3D Pose and Shape Estimation by Dense Render-and-Compare

ICCV 2019 · Yuanlu Xu, Song-Chun Zhu, Tony Tung

We present DenseRaC, a novel end-to-end framework for jointly estimating 3D human pose and body shape from a monocular RGB image. Our two-step framework takes the body pixel-to-surface correspondence map (i.e., IUV map) as a proxy representation and then estimates parameterized human pose and shape. Specifically, given an estimated IUV map, we develop a deep neural network optimizing 3D body reconstruction losses and further integrating a render-and-compare scheme to minimize differences between the input and the rendered output, i.e., dense body landmarks, body part masks, and adversarial priors. To boost learning, we further construct a large-scale synthetic dataset (MOCA) utilizing web-crawled Mocap sequences, 3D scans and animations. The generated data covers diversified camera views, human actions and body shapes, and is paired with full ground truth. Our model jointly learns to represent the 3D human body from hybrid datasets, mitigating the problem of unpaired training data. Our experiments show that DenseRaC obtains superior performance over the state of the art on public benchmarks of various human-related tasks.
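To make the two-step objective concrete, below is a minimal PyTorch sketch of the kind of loss the abstract describes: a 3D reconstruction term on body landmarks plus render-and-compare terms on IUV maps and part masks. This is an illustrative reading, not the authors' implementation: the class name `DenseRaCLoss`, the loss weights, and the assumption that a differentiable renderer has already produced `rendered_iuv` and `rendered_mask` from the predicted pose/shape parameters are all hypothetical, and the adversarial prior term is omitted.

```python
# Hypothetical sketch of a DenseRaC-style training objective.
# Assumes an upstream differentiable renderer has produced IUV maps and
# part masks from the predicted body; not the authors' code.
import torch
import torch.nn as nn

class DenseRaCLoss(nn.Module):
    def __init__(self, w_3d=1.0, w_iuv=1.0, w_mask=1.0):
        super().__init__()
        self.w_3d, self.w_iuv, self.w_mask = w_3d, w_iuv, w_mask

    def forward(self, pred_joints, gt_joints,       # (B, J, 3) each
                rendered_iuv, input_iuv,            # (B, 3, H, W) each
                rendered_mask, input_mask):         # (B, H, W) each
        # 3D reconstruction loss on body joints/landmarks.
        loss_3d = ((pred_joints - gt_joints) ** 2).sum(dim=-1).mean()
        # Render-and-compare: penalize pixelwise IUV disagreement where
        # both the rendered and input masks agree a body part is present.
        valid = (rendered_mask * input_mask).unsqueeze(1)
        loss_iuv = (valid * (rendered_iuv - input_iuv).abs()).mean()
        # Part-mask overlap term (a simple L1 here; the paper additionally
        # uses adversarial priors, omitted in this sketch).
        loss_mask = (rendered_mask - input_mask).abs().mean()
        return (self.w_3d * loss_3d
                + self.w_iuv * loss_iuv
                + self.w_mask * loss_mask)
```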


Results from the Paper


Ranked #79 on 3D Human Pose Estimation on MPI-INF-3DHP (using extra training data)

| Task                     | Dataset      | Model    | Metric             | Value | Global Rank |
|--------------------------|--------------|----------|--------------------|-------|-------------|
| 3D Human Pose Estimation | Human3.6M    | DenseRaC | Average MPJPE (mm) | 76.8  | #289        |
| 3D Human Pose Estimation | Human3.6M    | DenseRaC | PA-MPJPE           | 48    | #89         |
| 3D Human Pose Estimation | MPI-INF-3DHP | DenseRaC | AUC                | 41.1  | #63         |
| 3D Human Pose Estimation | MPI-INF-3DHP | DenseRaC | MPJPE              | 114.2 | #79         |
| 3D Human Pose Estimation | MPI-INF-3DHP | DenseRaC | PCK                | 76.9  | #70         |
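For reference, the two pose metrics reported on Human3.6M can be computed as in the standard NumPy sketch below (not tied to the paper's evaluation code). MPJPE averages per-joint Euclidean error in millimeters; PA-MPJPE first rigidly aligns the prediction to the ground truth with an optimal similarity transform (Procrustes analysis). MPI-INF-3DHP additionally reports PCK, the fraction of joints within a 150 mm threshold, and its AUC over a range of thresholds.

```python
# Standard MPJPE / PA-MPJPE computation; illustrative, not the paper's code.
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error; pred, gt are (J, 3) arrays in mm."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after aligning pred to gt with an optimal similarity transform."""
    mu_p, mu_g = pred.mean(axis=0), gt.mean(axis=0)
    p, g = pred - mu_p, gt - mu_g        # center both point sets
    U, s, Vt = np.linalg.svd(p.T @ g)    # 3x3 cross-covariance SVD
    R = (U @ Vt).T                       # optimal rotation (may be reflected)
    if np.linalg.det(R) < 0:             # fix improper rotation (reflection)
        Vt[-1] *= -1
        s[-1] *= -1
        R = (U @ Vt).T
    scale = s.sum() / (p ** 2).sum()     # optimal isotropic scale
    aligned = scale * p @ R.T + mu_g     # map prediction onto ground truth
    return mpjpe(aligned, gt)
```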

Methods


No methods listed for this paper.