3D Human Reconstruction
52 papers with code • 9 benchmarks • 14 datasets
Latest papers
SHERF: Generalizable Human NeRF from a Single Image
To this end, we propose a bank of 3D-aware hierarchical features, including global, point-level, and pixel-aligned features, to facilitate informative encoding.
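The idea of fusing feature levels can be sketched as follows. This is a minimal, illustrative fusion by concatenation, assuming NumPy and hypothetical shapes; SHERF's actual architecture is more involved.

```python
import numpy as np

def hierarchical_feature_bank(global_feat, point_feats, pixel_feats):
    """Fuse three feature levels by concatenation (illustrative only).

    global_feat: (G,)   one code for the whole image
    point_feats: (N, P) one vector per sampled 3D point
    pixel_feats: (N, C) pixel-aligned features for the same points
    """
    n = point_feats.shape[0]
    # Broadcast the global code to every 3D point, then concatenate
    # all three levels into one conditioning vector per point.
    g = np.broadcast_to(global_feat, (n, global_feat.shape[0]))
    return np.concatenate([g, point_feats, pixel_feats], axis=1)
```

The resulting per-point vector would then condition a NeRF-style decoder.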
X-Avatar: Expressive Human Avatars
Our method models bodies, hands, facial expressions and appearance in a holistic fashion and can be learned from either full 3D scans or RGB-D data.
Vid2Avatar: 3D Avatar Reconstruction from Videos in the Wild via Self-supervised Scene Decomposition
Specifically, we define a temporally consistent human representation in canonical space and formulate a global optimization over the background model, the canonical human shape and texture, and per-frame human pose parameters.
SEFD: Learning to Distill Complex Pose and Occlusion
This paper addresses the problem of three-dimensional (3D) human mesh estimation under complex poses and occlusion.
ECON: Explicit Clothed humans Optimized via Normal integration
To increase robustness for these cases, existing work uses an explicit parametric body model to constrain surface reconstruction, but this limits the recovery of free-form surfaces such as loose clothing that deviates from the body.
Super-resolution 3D Human Shape from a Single Low-Resolution Image
The approach overcomes limitations of existing methods for reconstructing 3D human shape from a single image, which require high-resolution images together with auxiliary data such as surface normals or a parametric model to recover high-detail shape.
Occupancy Planes for Single-view RGB-D Human Reconstruction
Specifically, a set of 3D locations within the view-frustum of the camera are first projected independently onto the image and a corresponding feature is subsequently extracted for each 3D location.
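The project-then-sample step described above can be sketched in a few lines. This is a simplified assumption-laden illustration (pinhole intrinsics, camera-space points, nearest-neighbour sampling instead of bilinear); the function name and signature are hypothetical.

```python
import numpy as np

def pixel_aligned_features(points_3d, feature_map, K):
    """Project 3D points into the image and sample a feature per point.

    points_3d:   (N, 3) camera-space points inside the view frustum
    feature_map: (H, W, C) per-pixel feature grid from an image encoder
    K:           (3, 3) pinhole intrinsics (illustrative assumption)
    """
    H, W, _ = feature_map.shape
    # Perspective projection: multiply by K, then divide by depth.
    proj = points_3d @ K.T
    uv = proj[:, :2] / proj[:, 2:3]
    # Nearest-neighbour lookup for brevity (bilinear in practice).
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)
    return feature_map[v, u]  # (N, C): one feature per 3D location
```

Each 3D location thus receives an image-aligned feature that downstream occupancy prediction can consume independently per point.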
AvatarGen: a 3D Generative Model for Animatable Human Avatars
Unsupervised generation of clothed virtual humans with varied appearances and animatable poses is important for creating 3D human avatars and other AR/VR applications.
Accurate 3D Body Shape Regression using Metric and Semantic Attributes
Since paired data with images and 3D body shape are rare, we exploit two sources of information: (1) we collect internet images of diverse "fashion" models together with a small set of anthropometric measurements; (2) we collect linguistic shape attributes for a wide range of 3D body meshes and the model images.
KeypointNeRF: Generalizing Image-based Volumetric Avatars using Relative Spatial Encoding of Keypoints
In this work, we investigate common issues with existing spatial encodings and propose a simple yet highly effective approach to modeling high-fidelity volumetric humans from sparse views.
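The spirit of a keypoint-relative encoding can be illustrated with a toy version: each query point is described by its distance to a sparse set of 3D keypoints, which is invariant to global translation. This is a simplified stand-in, not KeypointNeRF's exact formulation; the Gaussian bandwidth `sigma` is an arbitrary choice.

```python
import numpy as np

def relative_keypoint_encoding(query_points, keypoints, sigma=0.05):
    """Encode query points relative to sparse 3D keypoints.

    query_points: (N, 3) 3D points to be encoded
    keypoints:    (K, 3) sparse anchor keypoints (e.g. body joints)
    Returns an (N, K) Gaussian-weighted distance encoding.
    """
    # Pairwise offsets between every query point and every keypoint.
    diff = query_points[:, None, :] - keypoints[None, :, :]  # (N, K, 3)
    dist2 = np.sum(diff ** 2, axis=-1)                       # (N, K)
    # A Gaussian of the distance keeps the encoding local and
    # invariant to where the subject stands in world space.
    return np.exp(-dist2 / (2.0 * sigma ** 2))
```

Because the encoding depends only on relative positions, the same network can generalize across subjects and camera placements, which is the motivation the paper gives for spatial encodings of this kind.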