3D Scene Reconstruction

40 papers with code • 0 benchmarks • 3 datasets

Creating a 3D scene using either conventional SfM (structure-from-motion) pipelines or the latest deep learning approaches.
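
As a rough illustration of the conventional route, the sketch below runs the classic two-view step of an SfM pipeline with OpenCV: feature matching, essential-matrix estimation, relative-pose recovery, and triangulation. Function and variable names are ours, the thresholds are arbitrary, K is assumed to be the 3x3 camera intrinsics, and a real pipeline (e.g. COLMAP) adds many more views plus bundle adjustment.

    import cv2
    import numpy as np

    def two_view_reconstruction(img1, img2, K):
        """Recover relative pose and a sparse point cloud from two calibrated grayscale images."""
        orb = cv2.ORB_create(5000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # Match descriptors and keep the strongest correspondences.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:500]
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Essential matrix and relative pose (R, t), with RANSAC to reject outliers.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

        # Triangulate inlier correspondences into 3D points (reconstruction is up to scale).
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        inl = mask.ravel() > 0
        pts4d = cv2.triangulatePoints(P1, P2, pts1[inl].T, pts2[inl].T)
        return R, t, (pts4d[:3] / pts4d[3]).T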

Most implemented papers

Learning to Recover 3D Scene Shape from a Single Image

aim-uofa/AdelaiDepth CVPR 2021

Despite significant progress in monocular depth estimation in the wild, recent state-of-the-art methods cannot be used to recover accurate 3D scene shape due to an unknown depth shift induced by shift-invariant reconstruction losses used in mixed-data depth prediction training, and possible unknown camera focal length.
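
The ambiguity can be made concrete in a few lines of NumPy: a least-squares fit recovers the unknown scale and shift of an affine-invariant depth prediction against metric depth, and the back-projection shows why a wrong shift (or focal length) warps the 3D shape rather than just rescaling it. This is an illustrative sketch, not code from aim-uofa/AdelaiDepth; the function names and the intrinsics fx, fy, cx, cy are our own.

    import numpy as np

    def align_scale_shift(pred, gt, mask):
        """Least-squares scale s and shift b so that s * pred + b best matches gt on valid pixels."""
        p, g = pred[mask], gt[mask]
        A = np.stack([p, np.ones_like(p)], axis=1)    # [N, 2] design matrix
        (s, b), *_ = np.linalg.lstsq(A, g, rcond=None)
        return s, b

    def backproject(depth, fx, fy, cx, cy):
        """Unproject a depth map into a point cloud; an uncorrected shift or a wrong focal
        length distorts the shape of this cloud instead of just resizing it."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) / fx * depth
        y = (v - cy) / fy * depth
        return np.stack([x, y, depth], axis=-1)       # [H, W, 3]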

A Pose-only Solution to Visual Reconstruction and Navigation

aibluefisher/graphoptim 2 Mar 2021

Visual navigation and three-dimensional (3D) scene reconstruction are essential for robotics to interact with the surrounding environment.

RetrievalFuse: Neural 3D Scene Reconstruction with a Database

nihalsid/retrieval-fuse ICCV 2021

3D reconstruction of large scenes is a challenging problem due to the high-complexity nature of the solution space, in particular for generative neural networks.

TransformerFusion: Monocular RGB Scene Reconstruction using Transformers

aljazbozic/transformerfusion NeurIPS 2021

We introduce TransformerFusion, a transformer-based 3D scene reconstruction approach.

Panoptic 3D Scene Reconstruction From a Single RGB Image

xheon/panoptic-reconstruction NeurIPS 2021

Inspired by 2D panoptic segmentation, we propose to unify the tasks of geometric reconstruction, 3D semantic segmentation, and 3D instance segmentation into the task of panoptic 3D scene reconstruction - from a single RGB image, predicting the complete geometric reconstruction of the scene in the camera frustum of the image, along with semantic and instance segmentations.
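
As a rough mental model of the unified output (a hypothetical container, not the data structures in xheon/panoptic-reconstruction), a panoptic reconstruction can be thought of as a dense voxel volume in the camera frustum carrying geometry, a semantic class, and an instance id per voxel:

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class PanopticVolume:
        sdf: np.ndarray        # [D, H, W] float, truncated signed distance to the surface
        semantics: np.ndarray  # [D, H, W] int, semantic class per voxel (0 = free space)
        instances: np.ndarray  # [D, H, W] int, instance id per voxel (0 = no instance / "stuff")

        def occupied(self, truncation=1.0):
            """Voxels considered to lie near the reconstructed surface."""
            return np.abs(self.sdf) < truncation

        def instance_mask(self, instance_id):
            """Binary mask of a single reconstructed object instance."""
            return self.instances == instance_id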

Deblur-NeRF: Neural Radiance Fields from Blurry Images

limacv/Deblur-NeRF CVPR 2022

We demonstrate that our method can be used on both camera motion blur and defocus blur: the two most common types of blur in real scenes.
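
Both blur types fit the same image-formation idea: a blurry pixel is a weighted average of sharp renderings over a small kernel, whether that kernel traces a motion trajectory or a defocus disk. The compositing step below is a minimal, illustrative sketch of that model, not the Deblur-NeRF implementation; names and shapes are our own.

    import torch

    def composite_blurry_pixel(sharp_rgbs, kernel_weights):
        """sharp_rgbs: [K, 3] colors rendered along K jittered rays for one pixel.
        kernel_weights: [K] non-negative weights (motion trajectory or defocus disk)."""
        w = kernel_weights / kernel_weights.sum()     # normalize so the weights sum to 1
        return (w.unsqueeze(-1) * sharp_rgbs).sum(dim=0)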

MonoScene: Monocular 3D Semantic Scene Completion

cv-rits/MonoScene CVPR 2022

MonoScene proposes a 3D Semantic Scene Completion (SSC) framework, where the dense geometry and semantics of a scene are inferred from a single monocular RGB image.
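
The tensor shapes below sketch what an SSC prediction typically looks like: one score per voxel over the semantic classes plus an explicit "empty" class, so dense geometry and semantics fall out of a single argmax. Shapes and class count are illustrative assumptions, not MonoScene's actual interface.

    import torch

    num_classes = 20                                  # e.g. 19 semantic classes + 1 "empty"
    logits = torch.randn(1, num_classes, 60, 36, 60)  # [B, C, X, Y, Z] voxel grid of scores

    labels = logits.argmax(dim=1)                     # [B, X, Y, Z] per-voxel semantic label
    occupied = labels != 0                            # geometry = voxels not labeled "empty"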

Neural 3D Scene Reconstruction with the Manhattan-world Assumption

zju3dv/manhattan_sdf CVPR 2022

Based on the Manhattan-world assumption, planar constraints are employed to regularize the geometry in floor and wall regions predicted by a 2D semantic segmentation network.
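
A minimal sketch of such planar regularization, assuming unit surface normals sampled at points labeled floor or wall by the projected 2D segmentation: floor normals are pushed toward the up axis and wall normals toward being orthogonal to it. The loss form and names are ours, not the exact manhattan_sdf objective.

    import torch
    import torch.nn.functional as F

    def manhattan_loss(normals, floor_mask, wall_mask, up=torch.tensor([0.0, 0.0, 1.0])):
        """normals: [N, 3] surface normals at sampled points; floor_mask / wall_mask: [N]
        booleans from the 2D semantic segmentation (assumed non-empty here)."""
        n = F.normalize(normals, dim=-1)
        cos_up = (n * up).sum(dim=-1)                    # cosine between each normal and the up axis
        floor_term = (1.0 - cos_up[floor_mask]).mean()   # floor normals should point up
        wall_term = cos_up[wall_mask].abs().mean()       # wall normals should be horizontal
        return floor_term + wall_term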

READ: Large-Scale Neural Scene Rendering for Autonomous Driving

JOP-Lee/READ-Large-Scale-Neural-Scene-Rendering-for-Autonomous-Driving 11 May 2022

In this paper, a large-scale neural rendering method is proposed to synthesize the autonomous driving scene (READ), which makes it possible to synthesize large-scale driving scenarios on a PC through a variety of sampling schemes.

PhotoScene: Photorealistic Material and Lighting Transfer for Indoor Scenes

vilab-ucsd/photoscene CVPR 2022

Most indoor 3D scene reconstruction methods focus on recovering 3D geometry and scene layout.