3D Scene Reconstruction
40 papers with code • 0 benchmarks • 3 datasets
Creating 3D scenes using either conventional Structure-from-Motion (SfM) pipelines or recent deep learning approaches.
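As a concrete illustration of the conventional SfM side of the task, here is a minimal two-view sketch in NumPy on synthetic, noiseless correspondences in normalized image coordinates (all scene and pose values below are assumptions for the demo, not taken from any listed paper): eight-point essential-matrix estimation, pose decomposition with a cheirality check, and linear (DLT) triangulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scene: 3D points in front of both cameras.
X_true = rng.uniform(-1.0, 1.0, (40, 3)) + np.array([0.0, 0.0, 5.0])

# Ground-truth relative pose: x_cam2 = R_true @ x_cam1 + t_true.
a = 0.1
R_true = np.array([[np.cos(a), 0.0, np.sin(a)],
                   [0.0, 1.0, 0.0],
                   [-np.sin(a), 0.0, np.cos(a)]])
t_true = np.array([1.0, 0.0, 0.2])

def project(X):                       # normalized image coordinates (K = I)
    return X[:, :2] / X[:, 2:]

x1 = project(X_true)
x2 = project(X_true @ R_true.T + t_true)

# Eight-point algorithm: each match gives one row of A with A @ vec(E) = 0.
ones = np.ones(len(x1))
A = np.column_stack([x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
                     x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
                     x1[:, 0], x1[:, 1], ones])
E = np.linalg.svd(A)[2][-1].reshape(3, 3)
U, _, Vt = np.linalg.svd(E)
E = U @ np.diag([1.0, 1.0, 0.0]) @ Vt     # enforce essential-matrix spectrum

def triangulate(R, t):
    """Linear (DLT) triangulation with P1 = [I | 0], P2 = [R | t]."""
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t[:, None]])
    out = []
    for (u1, v1), (u2, v2) in zip(x1, x2):
        M = np.vstack([u1 * P1[2] - P1[0], v1 * P1[2] - P1[1],
                       u2 * P2[2] - P2[0], v2 * P2[2] - P2[1]])
        Xh = np.linalg.svd(M)[2][-1]
        out.append(Xh[:3] / Xh[3])
    return np.array(out)

# Decompose E into four (R, t) candidates; keep the one with the most
# triangulated points in front of both cameras (cheirality).
W = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
def n_in_front(R, t):
    X = triangulate(R, t)
    return np.sum((X[:, 2] > 0) & ((X @ R.T + t)[:, 2] > 0))

cands = []
for R in (U @ W @ Vt, U @ W.T @ Vt):
    R = R * np.sign(np.linalg.det(R))      # force a proper rotation
    cands += [(R, U[:, 2]), (R, -U[:, 2])]
R_est, t_est = max(cands, key=lambda c: n_in_front(*c))
# t_est recovers the translation direction only; global scale is unobservable.
```

A full SfM pipeline would precede this with feature matching and follow it with incremental registration and bundle adjustment; the deep learning approaches on this page replace or augment various of these stages.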
Benchmarks
These leaderboards are used to track progress in 3D Scene Reconstruction.
Latest papers
Know Your Neighbors: Improving Single-View Reconstruction via Spatial Vision-Language Reasoning
We propose KYN, a novel method for single-view scene reconstruction that reasons about semantic and spatial context to predict each point's density.
Total-Decom: Decomposed 3D Scene Reconstruction with Minimal Interaction
Scene reconstruction from multi-view images is a fundamental problem in computer vision and graphics.
BAD-Gaussians: Bundle Adjusted Deblur Gaussian Splatting
In this paper, we introduce a novel approach, named BAD-Gaussians (Bundle Adjusted Deblur Gaussian Splatting), which leverages explicit Gaussian representation and handles severe motion-blurred images with inaccurate camera poses to achieve high-quality scene reconstruction.
An evaluation of Deep Learning based stereo dense matching dataset shift from aerial images and a large scale stereo dataset
To address this challenge, we propose a method for generating ground-truth disparity maps directly from Light Detection and Ranging (LiDAR) data and images, producing a large and diverse collection of six aerial stereo datasets spanning four different areas, plus two areas with images at different resolutions.
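For a rectified stereo pair, ground-truth disparity follows from metric depth via the standard relation d = f·B/Z. A minimal sketch of that conversion (the focal length and baseline below are illustrative assumptions, not values from the paper):

```python
import numpy as np

# Hypothetical rectified stereo rig (assumed values, not from the paper):
focal_px = 1200.0   # focal length in pixels
baseline_m = 0.6    # distance between the two camera centres, metres

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Convert a metric depth map (e.g. LiDAR points projected into the
    left image) into a ground-truth disparity map for a rectified pair."""
    depth = np.asarray(depth_m, dtype=np.float64)
    disp = np.zeros_like(depth)
    valid = depth > 0                  # 0 marks pixels with no LiDAR return
    disp[valid] = focal_px * baseline_m / depth[valid]
    return disp

depth = np.array([[30.0, 60.0], [0.0, 120.0]])   # metres; 0 = no return
disp = depth_to_disparity(depth, focal_px, baseline_m)
# d = f * B / Z: 1200 * 0.6 / 30 = 24 px, /60 = 12 px, /120 = 6 px
```

Pixels without a LiDAR return keep disparity 0 here, which a training pipeline would treat as an invalid-disparity mask.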
PC-NeRF: Parent-Child Neural Radiance Fields Using Sparse LiDAR Frames in Autonomous Driving Environments
Extensive experiments show that PC-NeRF achieves high-precision novel LiDAR view synthesis and 3D reconstruction in large-scale scenes.
SlimmeRF: Slimmable Radiance Fields
To this end, we present SlimmeRF, a model that allows for instant test-time trade-offs between model size and accuracy through slimming, thus making the model simultaneously suitable for scenarios with different computing budgets.
Open-Fusion: Real-time Open-Vocabulary 3D Mapping and Queryable Scene Representation
Open-Fusion harnesses the power of a pre-trained vision-language foundation model (VLFM) for open-set semantic comprehension and employs the Truncated Signed Distance Function (TSDF) for swift 3D scene reconstruction.
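Since the blurb names the TSDF as the reconstruction workhorse, a minimal NumPy sketch of the standard truncated-signed-distance update (a weighted running average per voxel) may help. The grid layout, truncation distance, and intrinsics are illustrative assumptions, not Open-Fusion's actual implementation:

```python
import numpy as np

trunc = 0.1  # truncation distance in metres (assumed value)

def integrate(tsdf, weight, voxel_xyz, depth_map, K, cam_pose):
    """Fuse one depth frame into a TSDF volume.

    tsdf, weight: (N,) per-voxel truncated signed distance and weight
    voxel_xyz:    (N, 3) voxel centres in world coordinates
    depth_map:    (H, W) metric depth image
    K:            3x3 intrinsics; cam_pose: 4x4 camera-to-world transform
    """
    # Project every voxel centre into the depth image.
    world_to_cam = np.linalg.inv(cam_pose)
    cam = voxel_xyz @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = cam[:, 2]
    uv = cam @ K.T
    u = np.round(uv[:, 0] / z).astype(int)
    v = np.round(uv[:, 1] / z).astype(int)

    H, W = depth_map.shape
    ok = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.full_like(z, np.nan)
    d[ok] = depth_map[v[ok], u[ok]]

    # Signed distance along the viewing ray, truncated to [-1, 1].
    sdf = d - z
    upd = ok & ~np.isnan(d) & (sdf > -trunc)   # skip far-behind-surface voxels
    tsdf_obs = np.clip(sdf / trunc, -1.0, 1.0)

    # Weighted running average: D <- (W*D + D_obs) / (W + 1).
    tsdf[upd] = (weight[upd] * tsdf[upd] + tsdf_obs[upd]) / (weight[upd] + 1)
    weight[upd] += 1
    return tsdf, weight
```

Each incoming depth frame calls `integrate` once; the reconstructed surface is then the zero level set of the fused TSDF, typically extracted with marching cubes.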
PC-NeRF: Parent-Child Neural Radiance Fields under Partial Sensor Data Loss in Autonomous Driving Environments
Reconstructing large-scale 3D scenes is essential for autonomous vehicles, especially when partial sensor data is lost.
Dense 2D-3D Indoor Prediction with Sound via Aligned Cross-Modal Distillation
Sound can convey significant information for spatial reasoning in our daily lives.
Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion
Such promise and potential of event cameras and NeRF have inspired recent works to investigate the reconstruction of NeRF from a moving event camera.