3D Reconstruction
534 papers with code • 8 benchmarks • 54 datasets
3D Reconstruction is the task of creating a 3D model or representation of an object or scene from 2D images or other data sources. The resulting virtual representation can serve a variety of purposes, such as visualization, animation, simulation, and analysis, with applications in fields such as computer vision, robotics, and virtual reality.
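At its core, reconstructing 3D structure from 2D images amounts to triangulation: given the same point observed in two calibrated views, its 3D position can be recovered. A minimal sketch of linear (DLT) triangulation with numpy, using a hypothetical toy camera pair with identity intrinsics:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D pixel coordinates."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the right null vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy setup: identity intrinsics, second camera translated along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

# Project the ground-truth point into both views, then recover it.
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))
```

With noisy observations the null vector is replaced by the singular vector of smallest singular value, which is exactly what the SVD call above returns; real pipelines refine this linear estimate with nonlinear reprojection-error minimization.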
Image: Gwak et al.
Benchmarks
These leaderboards track progress in 3D Reconstruction.
Latest papers
GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks while requiring less than half the memory storage of Gaussian Splatting and increasing the rendering speed by up to 39%.
PC-NeRF: Parent-Child Neural Radiance Fields Using Sparse LiDAR Frames in Autonomous Driving Environments
Extensive experiments show that PC-NeRF achieves high-precision novel LiDAR view synthesis and 3D reconstruction in large-scale scenes.
Camera Calibration through Geometric Constraints from Rotation and Projection Matrices
The process of camera calibration involves estimating the intrinsic and extrinsic parameters, which are essential for accurately performing tasks such as 3D reconstruction, object tracking and augmented reality.
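The intrinsic and extrinsic parameters mentioned above are related through the projection matrix P = K [R | t], and a calibrated P can be split back into them with an RQ decomposition. A minimal sketch with numpy, using hypothetical intrinsic values for the toy check:

```python
import numpy as np

def rq(M):
    """RQ decomposition of a 3x3 matrix via QR of the row-reversed transpose."""
    P = np.flipud(np.eye(3))
    Q_, R_ = np.linalg.qr((P @ M).T)
    return P @ R_.T @ P, P @ Q_.T  # upper-triangular R, orthogonal Q

def decompose_projection(Pmat):
    """Split P = K [R | t] into intrinsics K, rotation R, translation t."""
    M, p4 = Pmat[:, :3], Pmat[:, 3]
    K, R = rq(M)
    # Resolve the sign ambiguity of RQ so K has a positive diagonal.
    D = np.diag(np.sign(np.diag(K)))
    K, R = K @ D, D @ R
    t = np.linalg.solve(K, p4)
    return K / K[2, 2], R, t

# Hypothetical ground truth: focal length 800, principal point (320, 240),
# a small rotation about the y-axis, and a translation t0.
K0 = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
a = 0.1
R0 = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
t0 = np.array([0.2, -0.1, 1.5])
P = K0 @ np.hstack([R0, t0[:, None]])

K, R, t = decompose_projection(P)
print(np.allclose(K, K0), np.allclose(R, R0), np.allclose(t, t0))
```

In practice the parameters are estimated rather than given, e.g. from multiple views of a checkerboard; the decomposition above is the step that separates the recovered projection into intrinsics and pose.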
EscherNet: A Generative Model for Scalable View Synthesis
We introduce EscherNet, a multi-view conditioned diffusion model for view synthesis.
DeepAAT: Deep Automated Aerial Triangulation for Fast UAV-based Mapping
The experimental results demonstrate DeepAAT's substantial improvements over conventional AAT methods, highlighting its potential to improve the efficiency and accuracy of UAV-based 3D reconstruction tasks.
Local Feature Matching Using Deep Learning: A Survey
This survey provides a comprehensive overview of local feature matching methods.
OmniSCV: An Omnidirectional Synthetic Image Generator for Computer Vision
In this paper, we present a tool for generating datasets of omnidirectional images with semantic and depth information.
3D Reconstruction and New View Synthesis of Indoor Environments based on a Dual Neural Radiance Field
One of the innovative features of Du-NeRF is that it decouples a view-independent component from the density field and uses it as a label to supervise the learning process of the SDF field.
pix2gestalt: Amodal Segmentation by Synthesizing Wholes
We introduce pix2gestalt, a framework for zero-shot amodal segmentation, which learns to estimate the shape and appearance of whole objects that are only partially visible behind occlusions.
Range-Agnostic Multi-View Depth Estimation With Keyframe Selection
Methods for 3D reconstruction from posed frames require prior knowledge about the scene metric range, usually to recover matching cues along the epipolar lines and narrow the search range.