3D Reconstruction
654 papers with code • 10 benchmarks • 57 datasets
3D Reconstruction is the task of creating a 3D model or representation of an object or scene from 2D images or other data sources. The resulting virtual representation can serve a variety of purposes, such as visualization, animation, simulation, and analysis, with applications in computer vision, robotics, and virtual reality.
(Image credit: Gwak et al.)
Most implemented papers
Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.
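The encoding the title refers to maps a spatial coordinate to learned features stored in small hash tables at several resolutions, which is what lets a tiny MLP replace a large one. A minimal sketch of the idea in 2D (hash primes, table sizes, and growth factor are illustrative, not the paper's reference implementation):

```python
import numpy as np

PRIMES = np.array([1, 2654435761], dtype=np.uint64)  # per-dimension hashing primes (2D)

def hash_coords(coords, table_size):
    """Spatial hash of integer grid coordinates into a fixed-size table."""
    h = np.zeros(coords.shape[:-1], dtype=np.uint64)
    for d in range(coords.shape[-1]):
        h ^= coords[..., d].astype(np.uint64) * PRIMES[d]
    return h % np.uint64(table_size)

def encode(x, tables, base_res=16, growth=2.0):
    """Concatenate bilinearly interpolated features from each level's hash table.
    x: point in [0, 1]^2; tables: list of (T, F) learned feature arrays."""
    feats = []
    for level, table in enumerate(tables):
        res = int(base_res * growth ** level)        # grid resolution at this level
        pos = x * res
        lo = np.floor(pos).astype(np.int64)
        w = pos - lo                                 # bilinear weights in the cell
        corners = lo + np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
        f = table[hash_coords(corners, len(table))]  # (4, F) corner features
        cw = np.array([(1 - w[0]) * (1 - w[1]), (1 - w[0]) * w[1],
                       w[0] * (1 - w[1]), w[0] * w[1]])
        feats.append(cw @ f)                         # blend corner features
    return np.concatenate(feats)
```

The concatenated multi-level features are then fed to a small MLP; in training, gradients flow back into the hash-table entries themselves.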
3D-R2N2: A Unified Approach for Single and Multi-view 3D Object Reconstruction
Inspired by the recent success of methods that employ shape priors to achieve robust 3D reconstructions, we propose a novel recurrent neural network architecture that we call the 3D Recurrent Reconstruction Neural Network (3D-R2N2).
Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping
We provide an open-source C++ library for real-time metric-semantic visual-inertial Simultaneous Localization And Mapping (SLAM).
The Double Sphere Camera Model
We evaluate the model using a calibration dataset with several different lenses, and compare the models using metrics relevant for Visual Odometry, i.e., reprojection error, as well as computation time for the projection and unprojection functions and their Jacobians.
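The model projects a 3D point through two unit spheres before a pinhole step, which gives a closed form with no trigonometric functions. A sketch of the projection equations (parameter names `xi` and `alpha` follow the paper's symbols; this is an illustrative transcription, not the authors' code):

```python
import numpy as np

def ds_project(p, fx, fy, cx, cy, xi, alpha):
    """Double sphere projection of a 3D point to pixel coordinates."""
    x, y, z = p
    d1 = np.sqrt(x * x + y * y + z * z)      # distance used by the first sphere
    zs = xi * d1 + z                         # z shifted by the sphere offset xi
    d2 = np.sqrt(x * x + y * y + zs * zs)    # distance used by the second sphere
    denom = alpha * d2 + (1.0 - alpha) * zs  # blended pinhole denominator
    return np.array([fx * x / denom + cx, fy * y / denom + cy])
```

With `xi = 0` and `alpha = 0` this reduces to the standard pinhole model, which makes the closed-form Jacobians the abstract mentions straightforward to derive.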
Occupancy Networks: Learning 3D Reconstruction in Function Space
With the advent of deep neural networks, learning-based approaches for 3D reconstruction have gained popularity.
PCRNet: Point Cloud Registration Network using PointNet Encoding
PointNet has recently emerged as a popular representation for unstructured point cloud data, allowing application of deep learning to tasks such as object detection, segmentation and shape completion.
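The key property of the PointNet encoding PCRNet builds on is permutation invariance: a shared per-point MLP followed by a symmetric max pool yields one global feature regardless of point order. A minimal numpy sketch (two dense layers with illustrative weight shapes, not the published architecture):

```python
import numpy as np

def pointnet_encode(points, W1, b1, W2, b2):
    """Order-invariant global feature: shared per-point MLP + max pooling.
    points: (N, 3) unstructured point cloud."""
    h = np.maximum(points @ W1 + b1, 0.0)  # shared layer 1 (ReLU), applied per point
    h = np.maximum(h @ W2 + b2, 0.0)       # shared layer 2 (ReLU)
    return h.max(axis=0)                   # symmetric max pool over all points
```

PCRNet feeds the concatenated global features of the two clouds to further layers that regress the registration transform.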
Convolutional Occupancy Networks
Recently, implicit neural representations have gained popularity for learning-based 3D reconstruction.
EPP-MVSNet: Epipolar-Assembling Based Depth Prediction for Multi-View Stereo
As a result, we achieve promising results on all datasets and the highest F-Score on the online TNT intermediate benchmark.
MVSNet: Depth Inference for Unstructured Multi-view Stereo
We present an end-to-end deep learning architecture for depth map inference from multi-view images.
DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
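The representation being learned is a signed distance function: negative inside the shape, zero on the surface, positive outside, so the surface is the zero level set. The convention is easy to see on an analytic example (DeepSDF replaces this closed form with a neural network conditioned on a shape code):

```python
import numpy as np

def sphere_sdf(p, center, radius):
    """Signed distance to a sphere: negative inside, zero on the surface,
    positive outside. DeepSDF learns such a function from data."""
    return np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(center, dtype=float)) - radius
```

A mesh can then be extracted from the learned function with marching cubes over its zero level set.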