Depth Completion
75 papers with code • 9 benchmarks • 10 datasets
The Depth Completion task is a sub-problem of depth estimation. In the sparse-to-dense depth completion problem, the goal is to infer the dense depth map of a 3-D scene given an RGB image and its corresponding sparse reconstruction in the form of a sparse depth map, obtained either from computational methods such as SfM (Structure-from-Motion) or from active sensors such as LiDAR or structured-light sensors.
Source: LiStereo: Generate Dense Depth Maps from LIDAR and Stereo Imagery, Unsupervised Depth Completion from Visual Inertial Odometry
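To make the sparse-to-dense setup concrete, the sketch below implements a deliberately naive baseline: every missing pixel is filled with the depth of its nearest valid measurement. This is an illustrative toy only, not the method of any paper listed here; real depth completion networks additionally condition on the RGB image and learn the densification.

```python
# Toy sparse-to-dense depth completion via nearest-neighbor filling.
# Illustrative baseline only: learned methods fuse the RGB image and
# sparse depth in a network rather than interpolating geometrically.

def densify_nearest(sparse, missing=0.0):
    """Fill each missing pixel with the depth of the nearest valid sample.

    sparse: 2-D list of floats; `missing` marks pixels with no measurement.
    """
    h, w = len(sparse), len(sparse[0])
    # Collect the valid (measured) pixels, e.g. projected LiDAR returns.
    known = [(r, c, sparse[r][c])
             for r in range(h) for c in range(w)
             if sparse[r][c] != missing]
    dense = [row[:] for row in sparse]
    for r in range(h):
        for c in range(w):
            if sparse[r][c] == missing:
                # Brute-force nearest valid measurement (squared distance).
                _, dense[r][c] = min(
                    ((r - kr) ** 2 + (c - kc) ** 2, d)
                    for kr, kc, d in known)
    return dense

# Example: a 4x4 "scan" with only three valid depth returns.
sparse = [[0.0, 0.0, 0.0, 2.0],
          [0.0, 1.0, 0.0, 0.0],
          [0.0, 0.0, 0.0, 0.0],
          [3.0, 0.0, 0.0, 0.0]]
dense = densify_nearest(sparse)
print(dense[2][1])  # -> 1.0, the nearest valid sample at (1, 1)
```

A real pipeline would replace the brute-force loop with a learned model, but the input/output contract — sparse depth in, dense depth out — is the same one the task definition above describes.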
Latest papers
RoofDiffusion: Constructing Roofs from Severely Corrupted Point Data via Diffusion
Accurate completion and denoising of roof height maps are crucial to reconstructing high-quality 3D buildings.
NeSLAM: Neural Implicit Mapping and Self-Supervised Feature Tracking With Depth Completion and Denoising
The occupancy scene representation is replaced with a Signed Distance Field (SDF) hierarchical scene representation for high-quality reconstruction and view synthesis.
Bilateral Propagation Network for Depth Completion
Depth completion aims to derive a dense depth map from sparse depth measurements with a synchronized color image.
A Concise but High-performing Network for Image Guided Depth Completion in Autonomous Driving
Depth completion is a crucial task in autonomous driving, aiming to convert a sparse depth map into a dense depth prediction.
Revisiting Depth Completion from a Stereo Matching Perspective for Cross-domain Generalization
This paper proposes a new framework for depth completion robust against domain-shifting issues.
SparseDC: Depth Completion from sparse and non-uniform inputs
The key contributions of SparseDC are two-fold.
What You See Is What You Detect: Towards better Object Densification in 3D detection
Considering that our approach focuses only on the visible part of the foreground objects to achieve accurate 3D detection, we named our method What You See Is What You Detect (WYSIWYD).
G2-MonoDepth: A General Framework of Generalized Depth Inference from Monocular RGB+X Data
This paper investigates a unified task of monocular depth inference, which infers high-quality depth maps from all kinds of input raw data from various robots in unseen scenes.
Revisiting Deformable Convolution for Depth Completion
Our study reveals that, different from prior work, deformable convolution needs to be applied on an estimated depth map with a relatively high density for better performance.
LiDAR Meta Depth Completion
While using a single model, our method yields significantly better results than a non-adaptive baseline trained on different LiDAR patterns.