Object Reconstruction
72 papers with code • 0 benchmarks • 2 datasets
Latest papers
TripoSR: Fast 3D Object Reconstruction from a Single Image
This technical report introduces TripoSR, a 3D reconstruction model leveraging a transformer architecture for fast feed-forward 3D generation, producing a 3D mesh from a single image in under 0.5 seconds.
FusionVision: A comprehensive approach of 3D object reconstruction and segmentation from RGB-D cameras using YOLO and fast segment anything
This paper introduces FusionVision, a comprehensive pipeline for robust 3D segmentation of objects in RGB-D imagery.
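The paper's exact pipeline code is not reproduced here, but the core RGB-D step, lifting a 2D instance mask into a 3D point cloud via depth, can be sketched with standard pinhole back-projection. The helper name and intrinsics `fx, fy, cx, cy` are illustrative assumptions, not the authors' API:

```python
import numpy as np

def mask_to_pointcloud(depth, mask, fx, fy, cx, cy):
    """Back-project masked depth pixels into camera-frame 3D points.

    Hypothetical helper using the pinhole model:
        X = (u - cx) * Z / fx,  Y = (v - cy) * Z / fy,  Z = depth(u, v)
    """
    v, u = np.nonzero(mask & (depth > 0))  # pixel rows/cols with valid depth
    z = depth[v, u]
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # (N, 3) points

# Toy example: one masked pixel at (row=1, col=2) with 2 m depth.
depth = np.full((4, 4), 2.0)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 2] = True
pts = mask_to_pointcloud(depth, mask, fx=1.0, fy=1.0, cx=2.0, cy=2.0)
```

In a pipeline like FusionVision, the mask would come from a 2D segmenter (e.g. YOLO detections refined by FastSAM) rather than being hand-set as here.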
iFusion: Inverting Diffusion for Pose-Free Reconstruction from Sparse Views
Our strategy unfolds in three steps: (1) We invert the diffusion model for camera pose estimation instead of synthesizing novel views.
Splatter Image: Ultra-Fast Single-View 3D Reconstruction
We introduce the Splatter Image, an ultra-fast approach for monocular 3D object reconstruction which operates at 38 FPS.
Ins-HOI: Instance Aware Human-Object Interactions Recovery
To address this, we further propose a complementary training strategy that leverages synthetic data to introduce instance-level shape priors, enabling the disentanglement of occupancy fields for different instances.
HOLD: Category-agnostic 3D Reconstruction of Interacting Hands and Objects from Video
Since humans interact with diverse objects every day, the holistic 3D capture of these interactions is important to understand and model human behaviour.
HandNeRF: Learning to Reconstruct Hand-Object Interaction Scene from a Single RGB Image
Both inference and training-data generation for 3D hand-object scene reconstruction are challenging due to the depth ambiguity of a single image and occlusions between the hand and the object.
ObjectSDF++: Improved Object-Compositional Neural Implicit Surfaces
Unlike traditional multi-view stereo approaches, the neural implicit surface-based methods leverage neural networks to represent 3D scenes as signed distance functions (SDFs).
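Methods like ObjectSDF++ replace the analytic distance function with a learned network \(f_\theta(x)\), but the SDF convention itself (negative inside the surface, zero on it, positive outside) is easy to show with a closed-form shape. A minimal sketch, using a sphere rather than anything from the paper:

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance from query points to a sphere surface:
    negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

# Query a unit sphere at the origin.
pts = np.array([[0.0, 0.0, 0.0],   # center  -> inside
                [1.0, 0.0, 0.0],   # on the surface
                [2.0, 0.0, 0.0]])  # outside
d = sphere_sdf(pts, np.array([0.0, 0.0, 0.0]), 1.0)
```

Neural implicit methods train an MLP so that its zero level set matches the observed surface; the reconstructed mesh is then extracted from that level set (commonly via marching cubes).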
A One-Stop 3D Target Reconstruction and Multilevel Segmentation Method
We extend object tracking and 3D reconstruction algorithms to support continuous segmentation labels for 3D object segmentation, leveraging advances in 2D image segmentation, in particular the Segment Anything Model (SAM), which applies a pretrained network to new scenes without additional training.
Contact-conditioned hand-held object reconstruction from single-view images
Reconstructing the shape of hand-held objects from single-view color images is a long-standing problem in computer vision and computer graphics.