3D Human Reconstruction
16 papers with code • 3 benchmarks • 5 datasets
Most existing monocular 3D pose estimation approaches focus only on a single body part, neglecting the fact that the essential nuance of human motion is conveyed through a concert of subtle movements of the face, hands, and body.
To construct FrankMocap, we build a state-of-the-art monocular 3D hand motion capture method by taking the hand part of the whole-body parametric model (SMPL-X).
I2L-MeshNet: Image-to-Lixel Prediction Network for Accurate 3D Human Pose and Mesh Estimation from a Single RGB Image
Most previous image-based 3D human pose and mesh estimation methods estimate the parameters of a human mesh model from an input image.
Ranked #3 on 3D Hand Pose Estimation on FreiHAND
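The lixel-style prediction named above replaces direct parameter regression with per-coordinate 1D likelihood maps, from which a continuous coordinate is read out via a differentiable soft-argmax. A minimal numpy sketch of that readout (function name and shapes are illustrative, not the paper's code):

```python
import numpy as np

def soft_argmax_1d(heatmap):
    """Differentiable readout of a 1D likelihood map.

    Softmax-normalize the heatmap into a probability distribution over
    positions, then take the expected position as the continuous coordinate.
    """
    z = np.exp(heatmap - heatmap.max())  # subtract max for numerical stability
    p = z / z.sum()
    coords = np.arange(len(heatmap))
    return float((p * coords).sum())

# Toy example: a vertex coordinate heatmap peaking at position 5 of 8
hm = np.zeros(8)
hm[5] = 6.0
x = soft_argmax_1d(hm)  # close to 5.0, but continuous-valued
```

Because the expectation is differentiable, the network can be supervised end-to-end on the recovered coordinates while still modeling prediction uncertainty along each axis.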
To understand how people look, interact, or perform tasks, we need to quickly and accurately capture their 3D body, face, and hands together from an RGB image.
Our goal is to train a single network that learns to avoid these problems and generate a coherent 3D reconstruction of all the humans in the scene.
Ranked #1 on 3D Human Reconstruction on AGORA
A key challenge in learning a visual representation for the high-fidelity 3D geometry of dressed humans lies in the limited availability of ground-truth data (e.g., 3D scanned models), which degrades 3D human reconstruction performance when such models are applied to real-world imagery.
Regression-based methods have recently shown promising results in reconstructing human meshes from monocular images.
Ranked #15 on 3D Human Pose Estimation on 3DPW (using extra training data)
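Regression-based mesh-recovery methods like those above typically predict shape and pose parameters together with a weak-perspective camera (a scale and a 2D translation), so the predicted mesh can be compared against 2D keypoint annotations. A minimal sketch of that projection step (names and values are illustrative):

```python
import numpy as np

def weak_perspective_project(verts, scale, trans):
    """Project 3D vertices (N, 3) to the image plane.

    Weak perspective drops the depth coordinate and applies a uniform
    scale plus a 2D translation: x2d = scale * x3d[:, :2] + trans.
    """
    return scale * verts[:, :2] + trans

# Two mesh vertices in camera coordinates (meters)
verts = np.array([[0.0, 0.0, 1.0],
                  [1.0, -1.0, 1.2]])

# Hypothetical predicted camera: scale in px/m, translation to image center
pts = weak_perspective_project(verts, scale=100.0, trans=np.array([112.0, 112.0]))
# The origin vertex lands exactly at the translation (the image center here)
```

The reprojection error between these 2D points and annotated keypoints is what lets such methods train on in-the-wild images (e.g., 3DPW) that lack full 3D mesh ground truth.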