SiTH: Single-view Textured Human Reconstruction with Image-Conditioned Diffusion

CVPR 2024 · Hsuan-I Ho, Jie Song, Otmar Hilliges

A long-standing goal of 3D human reconstruction is to create lifelike, fully detailed 3D humans from single-view images. The main challenge lies in inferring unknown body shapes, appearances, and clothing details in areas not visible in the images. To address this, we propose SiTH, a novel pipeline that uniquely integrates an image-conditioned diffusion model into a 3D mesh reconstruction workflow. At the core of our method lies the decomposition of the challenging single-view reconstruction problem into generative hallucination and reconstruction subproblems. For the former, we employ a powerful generative diffusion model to hallucinate the unseen back-view appearance based on the input images. For the latter, we leverage skinned body meshes as guidance to recover full-body textured meshes from the input and back-view images. SiTH requires as few as 500 3D human scans for training while maintaining its generality and robustness to diverse images. Extensive evaluations on two 3D human benchmarks, including our newly created one, highlight our method's superior accuracy and perceptual quality in 3D textured human reconstruction. Our code and evaluation benchmark are available at https://ait.ethz.ch/sith
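The two-stage decomposition described above can be sketched as follows. This is a minimal illustrative skeleton, not the authors' actual code: the function names are hypothetical, the diffusion model is stubbed out with a trivial image flip, and the mesh reconstruction step is stubbed as a data bundle.

```python
# Hypothetical sketch of SiTH's two-stage decomposition.
# All names and stub bodies are illustrative assumptions, not the real API.

def hallucinate_back_view(front_image):
    """Stage 1 (stub): an image-conditioned diffusion model would
    generate the unseen back-view RGB image here. We fake it with a
    horizontal flip of each pixel row."""
    return [row[::-1] for row in front_image]

def reconstruct_mesh(front_image, back_image, body_mesh_guidance):
    """Stage 2 (stub): recover a full-body textured mesh from the two
    views, using a skinned body mesh as geometric guidance. Here we
    just bundle the inputs to show the data flow."""
    return {
        "front": front_image,
        "back": back_image,
        "guidance": body_mesh_guidance,
    }

def sith_pipeline(front_image, body_mesh_guidance):
    """Full pipeline: hallucinate the back view, then reconstruct."""
    back_image = hallucinate_back_view(front_image)
    return reconstruct_mesh(front_image, back_image, body_mesh_guidance)
```

The point of the decomposition is that each subproblem is easier than direct single-view reconstruction: the generative stage only has to produce a plausible 2D back view, and the reconstruction stage then operates on two views plus a body-mesh prior instead of one unconstrained image.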

Results

Task                          Dataset       Model       Metric                   Value    Global Rank
3D Human Reconstruction       4D-DRESS      SiTH_Outer  Chamfer (cm)             2.322    #8
                                                        Normal Consistency       0.794    #11
                                                        IoU                      0.749    #13
3D Human Reconstruction       4D-DRESS      SiTH_Inner  Chamfer (cm)             2.110    #7
                                                        Normal Consistency       0.824    #7
                                                        IoU                      0.755    #10
3D Human Reconstruction       CustomHumans  SiTH        Chamfer Distance P-to-S  1.871    #1
                                                        Chamfer Distance S-to-P  2.045    #1
                                                        Normal Consistency       0.826    #1
                                                        f-Score                  37.029   #2
Lifelike 3D Human Generation  THuman2.0     SiTH        CLIP Similarity          0.8978   #3
                                                        SSIM                     0.8963   #2
                                                        LPIPS                    0.1396   #3
                                                        PSNR                     17.0533  #3
