Coherent Reconstruction of Multiple Humans from a Single Image

In this work, we address the problem of multi-person 3D pose estimation from a single image. A typical regression approach in the top-down setting of this problem first detects all humans and then reconstructs each of them independently. However, this type of prediction often produces incoherent results, e.g., interpenetration and inconsistent depth ordering between the people in the scene. Our goal is to train a single network that learns to avoid these problems and generates a coherent 3D reconstruction of all the humans in the scene. To this end, a key design choice is the incorporation of the SMPL parametric body model in our top-down framework, which enables the use of two novel losses. First, a distance field-based collision loss penalizes interpenetration among the reconstructed people. Second, a depth ordering-aware loss reasons about occlusions and promotes a depth ordering of people that leads to a rendering consistent with the annotated instance segmentation. This provides depth supervision signals to the network even when the image has no explicit 3D annotations. The experiments show that our approach outperforms previous methods on standard 3D pose benchmarks, while our proposed losses enable more coherent reconstruction in natural images. The project website with videos, results, and code can be found at: https://jiangwenpl.github.io/multiperson
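To make the two losses concrete, below is a minimal PyTorch sketch of an interpenetration penalty in the spirit of the paper's distance field-based collision loss. Everything here is an illustrative assumption rather than the authors' implementation: a coarse, dilated occupancy grid stands in for a proper signed distance field of each SMPL mesh, and the function names (`occupancy_grid`, `collision_loss`) are hypothetical.

```python
import torch
import torch.nn.functional as F

def occupancy_grid(verts, center, half_extent, dims=32):
    """Coarse binary occupancy grid over a cube of side 2*half_extent
    centered at `center`; a crude stand-in for a signed distance field."""
    grid = torch.zeros(dims, dims, dims)
    idx = ((verts - center + half_extent) / (2 * half_extent) * dims).long()
    valid = ((idx >= 0) & (idx < dims)).all(dim=1)
    idx = idx[valid]
    grid[idx[:, 2], idx[:, 1], idx[:, 0]] = 1.0  # stored as (z, y, x) for grid_sample
    return grid

def collision_loss(all_verts, half_extent=1.0, dims=32):
    """all_verts: list of (V_i, 3) vertex tensors, one per detected person.
    Penalizes vertices of person j that fall inside person i's volume."""
    loss = torch.tensor(0.0)
    for i, vi in enumerate(all_verts):
        center = vi.mean(dim=0).detach()
        occ = occupancy_grid(vi.detach(), center, half_extent, dims)
        # dilate the occupancy slightly so the field has support at the surface
        phi = F.max_pool3d(occ[None, None], kernel_size=3, stride=1, padding=1)
        for j, vj in enumerate(all_verts):
            if i == j:
                continue
            # normalize query points to [-1, 1]^3 as grid_sample expects
            pts = ((vj - center) / half_extent).reshape(1, 1, 1, -1, 3)
            # trilinear sampling keeps the penalty differentiable w.r.t. vj
            pen = F.grid_sample(phi, pts, align_corners=False, padding_mode='zeros')
            loss = loss + pen.sum()
    return loss
```

The depth ordering-aware loss can be sketched along the same lines: render each person's depth map, find who is currently closest to the camera at every pixel, and push the annotated person to the front wherever the rendering disagrees with the instance segmentation. Again, `depth_ordering_loss` and its exact inputs are assumptions; the paper formulates this over depth maps rendered from the SMPL meshes.

```python
import torch
import torch.nn.functional as F

def depth_ordering_loss(depth_maps, masks, instance_seg):
    """
    depth_maps:   (N, H, W) per-person rendered depth, +inf where a person is absent
    masks:        (N, H, W) bool per-person rendered silhouettes
    instance_seg: (H, W) long, annotated instance id per pixel (-1 = background)
    """
    loss = torch.tensor(0.0)
    front = depth_maps.argmin(dim=0)  # person currently rendered in front at each pixel
    for i in range(depth_maps.shape[0]):
        # pixels annotated as person i where person i renders but is occluded
        wrong = (instance_seg == i) & masks[i] & (front != i)
        if not wrong.any():
            continue
        ys, xs = wrong.nonzero(as_tuple=True)
        j = front[ys, xs]                 # the person wrongly rendered in front
        d_i = depth_maps[i, ys, xs]
        d_j = depth_maps[j, ys, xs]
        # log(1 + exp(d_i - d_j)) stays positive until d_i < d_j,
        # i.e. until person i moves in front of person j at these pixels
        loss = loss + F.softplus(d_i - d_j).sum()
    return loss
```

In a training loop, both terms would simply be added, with weights, to the usual keypoint and SMPL parameter losses; since the second term needs only instance segmentation, it supplies a depth supervision signal even on images without 3D annotations, as the abstract notes.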


Results from the Paper


3D Human Reconstruction on AGORA (model: PIXIE; errors in mm; FB = full body, B = body, F = face, LH/RH = left/right hand)

Metric        Value         Global Rank
FB-NMVE       233.9         #2
B-NMVE        173.4         #1
FB-NMJE       230.9         #2
B-NMJE        171.1         #1
FB-MVE        191.8         #2
B-MVE         142.2         #1
F-MVE         50.2          #2
LH/RH-MVE     49.5 / 49.0   #1
FB-MPJPE      189.3         #2
B-MPJPE       140.3         #1
F-MPJPE       54.5          #2
LH/RH-MPJPE   46.4 / 46.0   #1

3D Depth Estimation on Relative Human (model: CRMH)

Metric        Value    Global Rank
PCDR          54.83    #3
PCDR-Baby     34.74    #2
PCDR-Kid      48.37    #3
PCDR-Teen     59.11    #2
PCDR-Adult    55.47    #2
mPCDK         0.781    #3
