3D Object Reconstruction From A Single Image
11 papers with code • 2 benchmarks • 2 datasets
Latest papers
TripoSR: Fast 3D Object Reconstruction from a Single Image
This technical report introduces TripoSR, a 3D reconstruction model leveraging a transformer architecture for fast feed-forward 3D generation, producing a 3D mesh from a single image in under 0.5 seconds.
Share With Thy Neighbors: Single-View Reconstruction by Cross-Instance Consistency
Approaches for single-view reconstruction typically rely on viewpoint annotations, silhouettes, the absence of background, multiple views of the same instance, a template shape, or symmetry.
SDF-SRN: Learning Signed Distance 3D Object Reconstruction from Static Images
Dense 3D object reconstruction from a single image has recently witnessed remarkable advances, but supervising neural networks with ground-truth 3D shapes is impractical due to the laborious process of creating paired image-shape datasets.
From Image Collections to Point Clouds with Self-supervised Shape and Pose Networks
We learn both 3D point cloud reconstruction and pose estimation networks in a self-supervised manner, making use of a differentiable point cloud renderer to train with 2D supervision.
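The key ingredient for 2D supervision is projecting the predicted 3D points into a camera view so the rendering can be compared against an image or silhouette. A minimal NumPy sketch of pinhole projection (the `project_points` helper and the identity-camera setup are illustrative, not the paper's code):

```python
import numpy as np

def project_points(points, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera.

    points: (N, 3) 3D points; K: (3, 3) intrinsics;
    R: (3, 3) rotation; t: (3,) translation.
    Returns (N, 2) pixel coordinates.
    """
    cam = points @ R.T + t           # world frame -> camera frame
    uvw = cam @ K.T                  # camera frame -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide by depth

# Toy identity camera looking down +z
K = np.eye(3)
R = np.eye(3)
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0], [1.0, 1.0, 2.0]])
print(project_points(pts, K, R, t))  # [[0.  0. ], [0.5 0.5]]
```

A differentiable renderer then splats the projected points softly into an image so that gradients flow from the 2D loss back to both the point positions and the estimated pose.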
ARCH: Animatable Reconstruction of Clothed Humans
In this paper, we propose ARCH (Animatable Reconstruction of Clothed Humans), a novel end-to-end framework for accurate reconstruction of animation-ready 3D clothed humans from a monocular image.
PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization
Although current approaches have demonstrated their potential in real-world settings, they still fail to produce reconstructions with the level of detail often present in the input images.
Self-supervised 3D Shape and Viewpoint Estimation from Single Images for Robotics
We present a convolutional neural network for joint 3D shape prediction and viewpoint estimation from a single input image.
Occlusion-Net: 2D/3D Occluded Keypoint Localization Using Graph Networks
Central to this work is a trifocal tensor loss that provides indirect self-supervision for occluded keypoint locations that are visible in other views of the object.
PIFu: Pixel-Aligned Implicit Function for High-Resolution Clothed Human Digitization
We introduce Pixel-aligned Implicit Function (PIFu), a highly effective implicit representation that locally aligns pixels of 2D images with the global context of their corresponding 3D object.
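The core idea of a pixel-aligned implicit function is to condition an MLP on an image feature sampled at the 2D projection of each 3D query point, plus that point's depth. A minimal sketch, assuming integer pixel lookups for brevity (PIFu uses bilinear sampling) and a stand-in `tiny_mlp` in place of a learned network:

```python
import numpy as np

rng = np.random.default_rng(0)

def pixel_aligned_query(feat_map, xy, z, mlp):
    """Evaluate an implicit function at 3D points using pixel-aligned features.

    feat_map: (C, H, W) image feature map.
    xy: (N, 2) integer pixel coordinates of each point's 2D projection.
    z: (N,) depth values of the query points.
    mlp: callable mapping (N, C+1) features -> (N,) occupancy in (0, 1).
    """
    local = feat_map[:, xy[:, 1], xy[:, 0]].T  # (N, C) per-point image features
    return mlp(np.concatenate([local, z[:, None]], axis=1))

def tiny_mlp(x):
    # Stand-in for a learned MLP: fixed linear layer + sigmoid.
    w = np.ones(x.shape[1]) / x.shape[1]
    return 1.0 / (1.0 + np.exp(-(x @ w)))

feat = rng.standard_normal((8, 16, 16))     # e.g. output of an image encoder
xy = np.array([[3, 4], [10, 12]])           # projected query pixels
z = np.array([0.1, -0.2])                   # query depths
occ = pixel_aligned_query(feat, xy, z, tiny_mlp)
print(occ.shape)  # (2,)
```

Because the feature is local to the projected pixel, the function can resolve fine detail, while the depth input lets it disambiguate points along the same camera ray.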
A Point Set Generation Network for 3D Object Reconstruction from a Single Image
Our final solution is a conditional shape sampler, capable of predicting multiple plausible 3D point clouds from an input image.
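Training a network that outputs unordered point sets requires a loss that is invariant to point ordering; Chamfer distance is the standard choice in this line of work. A minimal sketch (a brute-force O(NM) version for illustration, not the paper's implementation):

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    # Average nearest-neighbor distance in both directions.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(chamfer_distance(a, b))      # 0.0 for identical sets
print(chamfer_distance(a, b + 1.0) > 0.0)  # True once the sets diverge
```

For a conditional sampler that produces multiple plausible shapes per image, a natural extension is to take the minimum of this loss over the sampled predictions, penalizing only the best hypothesis.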