3D Scene Reconstruction
23 papers with code • 0 benchmarks • 3 datasets
Reconstructing 3D scenes using either conventional structure-from-motion (SfM) pipelines or recent deep learning approaches.
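A minimal sketch of the geometric core of an SfM pipeline, linear (DLT) triangulation of a point observed in two calibrated views, using only NumPy. The intrinsics, camera poses, and point below are illustrative, not taken from any specific method:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: 2D image points."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with
    # the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Illustrative setup: one camera at the origin, a second translated along x.
K = np.diag([500.0, 500.0, 1.0]); K[:2, 2] = 320.0, 240.0
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]

X_est = triangulate(P1, P2, x1, x2)
```

In a real pipeline the 2D correspondences come from feature matching and the poses from essential-matrix estimation plus bundle adjustment; this shows only the triangulation step that turns matched observations into 3D structure.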
These leaderboards are used to track progress in 3D Scene Reconstruction.
Most implemented papers
The Replica Dataset: A Digital Replica of Indoor Spaces
We introduce Replica, a dataset of 18 highly photo-realistic 3D indoor scene reconstructions at room and building scale.
CoReNet: Coherent 3D scene reconstruction from a single RGB image
Furthermore, we adapt our model to address the harder task of reconstructing multiple objects from a single image.
NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video
We present a novel framework named NeuralRecon for real-time 3D scene reconstruction from a monocular video.
Neural RGB->D Sensing: Depth and Uncertainty from a Video Camera
Depth sensing is crucial for 3D reconstruction and scene understanding.
Atlas: End-to-End 3D Scene Reconstruction from Posed Images
Traditional approaches to 3D reconstruction rely on an intermediate representation of depth maps prior to estimating a full 3D model of a scene.
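The depth-map pipeline that Atlas argues against typically fuses per-frame depth predictions into a truncated signed distance field (TSDF) via a weighted running average. A minimal NumPy sketch of that fusion step; the voxel layout, intrinsics, and truncation distance are illustrative assumptions:

```python
import numpy as np

def integrate_depth(tsdf, weights, depth, K, voxel_centers, trunc=0.1):
    """Fuse one depth map into a TSDF volume with a weighted running
    average, as in classic volumetric depth fusion.
    voxel_centers: (N, 3) voxel positions already in the camera frame."""
    uvz = voxel_centers @ K.T                     # project voxels into image
    u = np.round(uvz[:, 0] / uvz[:, 2]).astype(int)
    v = np.round(uvz[:, 1] / uvz[:, 2]).astype(int)
    z = uvz[:, 2]
    h, w = depth.shape
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    sdf = depth[v[valid], u[valid]] - z[valid]    # signed distance along ray
    keep = sdf > -trunc                           # skip far-occluded voxels
    idx = np.flatnonzero(valid)[keep]
    d = np.clip(sdf[keep], -trunc, trunc) / trunc
    tsdf[idx] = (tsdf[idx] * weights[idx] + d) / (weights[idx] + 1.0)
    weights[idx] += 1.0

# Toy example: a fronto-parallel surface at depth 2.0 (illustrative values).
K = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 2.0], [0.0, 0.0, 1.0]])
depth = np.full((4, 4), 2.0)
voxels = np.array([[0.0, 0.0, 1.9], [0.0, 0.0, 2.0], [0.0, 0.0, 2.05]])
tsdf, weights = np.zeros(3), np.zeros(3)
integrate_depth(tsdf, weights, depth, K, voxels)
```

Atlas instead regresses the TSDF directly from image features, skipping per-frame depth entirely.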
GRF: Learning a General Radiance Field for 3D Representation and Rendering
We present a simple yet powerful neural network that implicitly represents and renders 3D objects and scenes only from 2D observations.
Learning to Recover 3D Scene Shape from a Single Image
Despite significant progress in monocular depth estimation in the wild, recent state-of-the-art methods cannot recover accurate 3D scene shape: the shift-invariant reconstruction losses used in mixed-data depth training induce an unknown depth shift, and the camera focal length may also be unknown.
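The shift (and scale) ambiguity described here can be resolved when a few metric depth samples are available, by closed-form least squares. A minimal NumPy sketch; the synthetic depth values are illustrative, not the paper's method:

```python
import numpy as np

def align_scale_shift(pred, gt):
    """Recover scale s and shift t minimising ||s * pred + t - gt||^2,
    undoing the ambiguity left by shift-invariant depth losses.
    pred, gt: 1D arrays of predicted and metric depths at the same pixels."""
    A = np.stack([pred, np.ones_like(pred)], axis=1)
    s, t = np.linalg.lstsq(A, gt, rcond=None)[0]
    return s, t

# Synthetic example: metric depth is an unknown affine map of the prediction.
pred = np.array([0.1, 0.4, 0.9, 1.3])
gt = 2.5 * pred + 0.7
s, t = align_scale_shift(pred, gt)
```

After alignment, `s * pred + t` is in metric units; the paper additionally estimates the focal length, which this sketch does not address.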
RetrievalFuse: Neural 3D Scene Reconstruction with a Database
3D reconstruction of large scenes is a challenging problem due to the high-complexity nature of the solution space, in particular for generative neural networks.
TransformerFusion: Monocular RGB Scene Reconstruction using Transformers
We introduce TransformerFusion, a transformer-based 3D scene reconstruction approach.
Panoptic 3D Scene Reconstruction From a Single RGB Image
Inspired by 2D panoptic segmentation, we propose to unify the tasks of geometric reconstruction, 3D semantic segmentation, and 3D instance segmentation into the task of panoptic 3D scene reconstruction: from a single RGB image, predict the complete geometric reconstruction of the scene in the camera frustum of the image, along with semantic and instance segmentations.