3D Shape Reconstruction
44 papers with code • 2 benchmarks • 4 datasets
Most implemented papers
Multi-Garment Net: Learning to Dress 3D People from Images
We present Multi-Garment Network (MGN), a method to predict body shape and clothing, layered on top of the SMPL model, from a few frames (1-8) of a video.
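A minimal sketch of the layered idea (plain NumPy, with made-up sizes and variable names rather than the authors' code): a garment is stored as per-vertex offsets registered to the underlying SMPL body surface, so dressing a new body reduces to adding the offsets to the corresponding body vertices.

```python
import numpy as np

# Minimal illustration of a layered body + garment representation
# (hypothetical sizes; MGN itself operates on the SMPL mesh topology).
N_BODY_VERTS = 6890          # SMPL vertex count
N_GARMENT_VERTS = 4000       # hypothetical garment template size

rng = np.random.default_rng(0)

# Unclothed body vertices, e.g. produced by an SMPL forward pass.
body_verts = rng.normal(size=(N_BODY_VERTS, 3))

# A garment is registered to a subset of body vertices plus a
# per-vertex displacement layered on top of the body surface.
garment_to_body = rng.integers(0, N_BODY_VERTS, size=N_GARMENT_VERTS)
garment_offsets = 0.01 * rng.normal(size=(N_GARMENT_VERTS, 3))

def dress(body_verts, garment_to_body, garment_offsets):
    """Place the garment layer on a (possibly re-posed/re-shaped) body."""
    return body_verts[garment_to_body] + garment_offsets

garment_verts = dress(body_verts, garment_to_body, garment_offsets)
print(garment_verts.shape)  # (4000, 3)
```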
Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
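As a rough illustration of the multiresolution hash encoding named in the title (a simplified PyTorch sketch, not the official tiny-cuda-nn implementation; it uses nearest-vertex lookup where the paper trilinearly interpolates over the surrounding grid vertices):

```python
import torch

# Minimal sketch of a multiresolution hash encoding: each level hashes
# integer grid-vertex coordinates into a small learnable feature table,
# and per-level features are concatenated before a tiny MLP.
class HashEncoding(torch.nn.Module):
    def __init__(self, n_levels=8, table_size=2**14, n_features=2,
                 base_res=16, growth=1.5):
        super().__init__()
        self.resolutions = [int(base_res * growth ** l) for l in range(n_levels)]
        self.tables = torch.nn.ParameterList([
            torch.nn.Parameter(1e-4 * torch.randn(table_size, n_features))
            for _ in range(n_levels)
        ])
        self.table_size = table_size
        # Large primes for the spatial hash, XOR-combined per coordinate.
        self.primes = torch.tensor([1, 2654435761, 805459861])

    def forward(self, x):                      # x: (B, 3), coordinates in [0, 1]
        feats = []
        for res, table in zip(self.resolutions, self.tables):
            # Nearest grid vertex; the paper interpolates the features of
            # all 8 surrounding vertices instead.
            idx = torch.round(x * res).long() * self.primes
            h = (idx[:, 0] ^ idx[:, 1] ^ idx[:, 2]) % self.table_size
            feats.append(table[h])
        return torch.cat(feats, dim=-1)        # (B, n_levels * n_features)

enc = HashEncoding()
print(enc(torch.rand(4, 3)).shape)             # torch.Size([4, 16])
```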
VIBE: Video Inference for Human Body Pose and Shape Estimation
Human motion is fundamental to understanding behavior.
TailorNet: Predicting Clothing in 3D as a Function of Human Pose, Shape and Garment Style
While the low-frequency component is predicted from pose, shape and style parameters with an MLP, the high-frequency component is predicted with a mixture of shape-style-specific pose models.
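A rough sketch of that decomposition (PyTorch, with illustrative dimensions; the real TailorNet weights its pose models by shape/style similarity, whereas the plain learned softmax below is a simplification):

```python
import torch

# Per-vertex garment displacements as the sum of a smooth low-frequency
# term and a high-frequency term from a mixture of K pose models.
N_VERTS, POSE_D, SHAPE_D, STYLE_D, K = 4000, 72, 10, 4, 5

def mlp(in_dim, out_dim, hidden=256):
    return torch.nn.Sequential(
        torch.nn.Linear(in_dim, hidden), torch.nn.ReLU(),
        torch.nn.Linear(hidden, out_dim))

low_freq = mlp(POSE_D + SHAPE_D + STYLE_D, N_VERTS * 3)        # smooth term
high_freq = torch.nn.ModuleList([mlp(POSE_D, N_VERTS * 3) for _ in range(K)])
mix_weights = mlp(SHAPE_D + STYLE_D, K)                         # mixture weights

def garment_displacements(pose, shape, style):
    d_low = low_freq(torch.cat([pose, shape, style], dim=-1))
    w = torch.softmax(mix_weights(torch.cat([shape, style], dim=-1)), dim=-1)
    d_high = sum(w[:, k:k + 1] * high_freq[k](pose) for k in range(K))
    return (d_low + d_high).view(-1, N_VERTS, 3)

disp = garment_displacements(torch.randn(2, POSE_D),
                             torch.randn(2, SHAPE_D),
                             torch.randn(2, STYLE_D))
print(disp.shape)  # torch.Size([2, 4000, 3])
```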
3D Shape Reconstruction from Sketches via Multi-view Convolutional Networks
The decoder converts this representation into depth and normal maps capturing the underlying surface from several output viewpoints.
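A skeletal version of such a decoder (PyTorch, illustrative layer sizes only, not the paper's network): one latent code from the sketch encoder is upsampled into a depth map and a normal map for each output viewpoint.

```python
import torch

# Shared latent code decoded into V per-viewpoint depth + normal maps.
LATENT, V, RES = 512, 12, 64

class MultiViewDecoder(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(LATENT, 256 * 4 * 4)
        self.up = torch.nn.Sequential(
            torch.nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(32, V * 4, 4, stride=2, padding=1))
        # 4 channels per view: 1 depth + 3 normal components.

    def forward(self, z):
        x = self.fc(z).view(-1, 256, 4, 4)
        out = self.up(x).view(-1, V, 4, RES, RES)
        depth, normals = out[:, :, :1], out[:, :, 1:]
        return depth, torch.nn.functional.normalize(normals, dim=2)

dec = MultiViewDecoder()
depth, normals = dec(torch.randn(2, LATENT))
print(depth.shape, normals.shape)  # (2, 12, 1, 64, 64) (2, 12, 3, 64, 64)
```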
Matryoshka Networks: Predicting 3D Geometry via Nested Shape Layers
We scale this baseline to higher resolutions by proposing a memory-efficient shape encoding, which recursively decomposes a 3D shape into nested shape layers, similar to the pieces of a Matryoshka doll.
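The nesting can be illustrated with a toy NumPy decomposition (a simplification of the paper's depth-map-based shape layers, which are predicted by a CNN rather than computed like this): each layer fills every axis-aligned ray between its first and last occupied voxel, and the mis-filled residual is decomposed recursively, so the shape is recovered as L1 \ (L2 \ (L3 \ ...)).

```python
import numpy as np

def fill_spans(occ):
    """Fill each (x, y) ray of a binary volume between first/last occupancy."""
    any_occ = occ.any(axis=2)
    first = occ.argmax(axis=2)
    last = occ.shape[2] - 1 - occ[:, :, ::-1].argmax(axis=2)
    z = np.arange(occ.shape[2])
    return any_occ[:, :, None] & (z >= first[:, :, None]) & (z <= last[:, :, None])

def encode_layers(occ, max_layers=16):
    layers, residual = [], occ.copy()
    for _ in range(max_layers):
        if not residual.any():
            break
        layer = fill_spans(residual)
        layers.append(layer)
        residual = layer & ~residual          # what was filled by mistake
    return layers

def decode_layers(layers):
    shape = np.zeros_like(layers[0])
    for layer in reversed(layers):            # L1 \ (L2 \ (L3 \ ...))
        shape = layer & ~shape
    return shape

# Round-trip check on a random blobby volume.
rng = np.random.default_rng(0)
occ = rng.random((16, 16, 16)) > 0.7
print(np.array_equal(decode_layers(encode_layers(occ)), occ))  # True
```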
PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization
Although current approaches have demonstrated their potential in real-world settings, they still fail to produce reconstructions with the level of detail often present in the input images.
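A single pixel-aligned implicit-function level, the building block named in the title, can be sketched as follows (illustrative PyTorch, not the released PIFuHD network): image features are sampled at the 2D projection of each 3D query point, and an MLP maps them, together with the point's depth, to an inside/outside probability.

```python
import torch
import torch.nn.functional as F

# Sketch of one pixel-aligned implicit function level with a stand-in
# image encoder; PIFuHD stacks a coarse and a fine level of this kind.
class PixelAlignedIF(torch.nn.Module):
    def __init__(self, feat_dim=32):
        super().__init__()
        self.backbone = torch.nn.Conv2d(3, feat_dim, 7, padding=3)   # stand-in encoder
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(feat_dim + 1, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 1))

    def forward(self, image, points):
        """image: (B, 3, H, W); points: (B, N, 3) with x, y in [-1, 1]."""
        feats = self.backbone(image)                                  # (B, C, H, W)
        xy = points[:, :, :2].unsqueeze(2)                            # (B, N, 1, 2)
        sampled = F.grid_sample(feats, xy, align_corners=True)        # (B, C, N, 1)
        sampled = sampled.squeeze(-1).permute(0, 2, 1)                # (B, N, C)
        z = points[:, :, 2:]                                          # depth feature
        return torch.sigmoid(self.mlp(torch.cat([sampled, z], dim=-1)))  # (B, N, 1)

net = PixelAlignedIF()
occ = net(torch.randn(1, 3, 128, 128), torch.rand(1, 1024, 3) * 2 - 1)
print(occ.shape)  # torch.Size([1, 1024, 1])
```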
Multi-modal 3D Shape Reconstruction Under Calibration Uncertainty using Parametric Level Set Methods
This method not only lets us represent the object analytically and compactly, but also allows us to overcome calibration-related noise that originates from inaccurate acquisition parameters.
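A toy example of a parametric level set (NumPy, with made-up radial-basis-function parameters; the paper fits such parameters to the multi-modal measurements): the surface is described analytically by a handful of centers, widths and weights.

```python
import numpy as np

# Toy parametric level set: the object is the region where a sum of
# radial basis functions exceeds an iso value, so a few parameters
# describe the surface analytically.
rng = np.random.default_rng(1)
centers = rng.uniform(-1, 1, size=(8, 3))   # RBF centers
widths = rng.uniform(0.2, 0.5, size=8)      # RBF widths
weights = rng.uniform(0.5, 1.5, size=8)     # RBF weights

def phi(points):
    """Evaluate the level-set function at (N, 3) query points."""
    d2 = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (N, 8)
    return (weights * np.exp(-d2 / widths ** 2)).sum(-1)             # (N,)

def inside(points, iso=0.5):
    """A point is inside the object where phi exceeds the iso value."""
    return phi(points) > iso

pts = rng.uniform(-1, 1, size=(5, 3))
print(phi(pts), inside(pts))
```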
3D Reconstruction of Novel Object Shapes from Single Images
This is challenging as it requires a model to learn a representation that can infer both the visible and occluded portions of any object using a limited training set.
3D Human Shape and Pose from a Single Low-Resolution Image with Self-Supervised Learning
3D human shape and pose estimation from monocular images has been an active area of research in computer vision, having a substantial impact on the development of new applications, from activity recognition to creating virtual avatars.