PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization

Recent advances in image-based 3D human shape estimation have been driven by the significant improvement in representation power afforded by deep neural networks. Although current approaches have demonstrated their potential in real-world settings, they still fail to produce reconstructions with the level of detail often present in the input images. We argue that this limitation stems primarily from two conflicting requirements: accurate predictions require large context, but precise predictions require high resolution. Due to memory limitations in current hardware, previous approaches tend to take low-resolution images as input to cover large spatial context, and produce less precise (or low-resolution) 3D estimates as a result. We address this limitation by formulating a multi-level architecture that is end-to-end trainable. A coarse level observes the whole image at lower resolution and focuses on holistic reasoning. This provides context to a fine level which estimates highly detailed geometry by observing higher-resolution images. We demonstrate that our approach significantly outperforms existing state-of-the-art techniques on single-image human shape reconstruction by fully leveraging 1k-resolution input images.
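The core mechanism described above, pixel-aligned implicit functions queried at two levels of resolution, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the feature maps stand in for CNN backbone outputs, the channel counts, resolutions, and MLP weights are arbitrary placeholders, and the camera is simplified to an orthographic projection.

```python
import numpy as np

rng = np.random.default_rng(0)

def bilinear_sample(feat, pts):
    """Sample a (C, H, W) feature map at normalized 2D points (N, 2) in [0, 1]."""
    C, H, W = feat.shape
    x = pts[:, 0] * (W - 1)
    y = pts[:, 1] * (H - 1)
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx, wy = x - x0, y - y0
    top = feat[:, y0, x0] * (1 - wx) + feat[:, y0, x0 + 1] * wx
    bot = feat[:, y0 + 1, x0] * (1 - wx) + feat[:, y0 + 1, x0 + 1] * wx
    return (top * (1 - wy) + bot * wy).T  # (N, C)

# Stand-ins for backbone outputs (hypothetical shapes):
coarse_feat = rng.standard_normal((8, 128, 128))   # whole image, low resolution
fine_feat = rng.standard_normal((16, 512, 512))    # high-resolution input

def occupancy(points_xyz, w1, b1, w2, b2):
    """Query occupancy for 3D points (N, 3) with x, y, z in [0, 1].

    Each point is projected (orthographically here) to an image location;
    pixel-aligned features from the coarse and fine levels are concatenated
    with the depth z and fed to a small MLP. In the real system the MLP and
    backbones are trained end to end; here the weights are random."""
    xy = points_xyz[:, :2]
    z = points_xyz[:, 2:3]
    phi = np.concatenate(
        [bilinear_sample(coarse_feat, xy),  # holistic context
         bilinear_sample(fine_feat, xy),    # fine detail
         z],
        axis=1)                             # (N, 8 + 16 + 1)
    h = np.tanh(phi @ w1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ w2 + b2)))  # occupancy in (0, 1)

w1 = rng.standard_normal((25, 32)); b1 = np.zeros(32)
w2 = rng.standard_normal((32, 1)); b2 = np.zeros(1)
pts = rng.random((100, 3))
occ = occupancy(pts, w1, b1, w2, b2).ravel()  # one occupancy value per query point
```

The surface is then recovered by thresholding the occupancy field at 0.5 (e.g. via marching cubes); the coarse features supply global context while the fine features preserve detail from the 1k-resolution input.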

CVPR 2020
| Task | Dataset | Model | Metric | Value | Rank |
|---|---|---|---|---|---|
| 3D Object Reconstruction From A Single Image | BUFF | ML-PIFu (end-to-end) | Point-to-surface distance (cm) | 0.25 | #1 |
| 3D Object Reconstruction From A Single Image | BUFF | ML-PIFu (end-to-end) | Chamfer (cm) | 1.525 | #4 |
| 3D Object Reconstruction From A Single Image | BUFF | ML-PIFu (end-to-end) | Surface normal consistency | 0.22 | #5 |
| 3D Object Reconstruction From A Single Image | BUFF | ML-PIFu (alternate) | Point-to-surface distance (cm) | 1.63 | #5 |
| 3D Object Reconstruction From A Single Image | BUFF | ML-PIFu (alternate) | Chamfer (cm) | 1.73 | #5 |
| 3D Object Reconstruction From A Single Image | BUFF | ML-PIFu (alternate) | Surface normal consistency | 0.133 | #4 |
| 3D Human Reconstruction | CAPE | PIFuHD | Chamfer (cm) | 3.237 | #3 |
| 3D Human Reconstruction | CAPE | PIFuHD | P2S (cm) | 3.123 | #4 |
| 3D Human Reconstruction | CAPE | PIFuHD | NC | 0.112 | #3 |
| 3D Object Reconstruction From A Single Image | RenderPeople | ML-PIFu (end-to-end) | Chamfer (cm) | 1.525 | #3 |
