Depth Prediction
175 papers with code • 3 benchmarks • 4 datasets
Libraries
Use these libraries to find Depth Prediction models and implementations.
Most implemented papers
Towards Better Generalization: Joint Depth-Pose Learning without PoseNet
In this work, we tackle the essential problem of scale inconsistency for self-supervised joint depth-pose learning.
S2R-DepthNet: Learning a Generalizable Depth-specific Structural Representation
S2R-DepthNet consists of: a) a Structure Extraction (STE) module, which extracts a domain-invariant structural representation from an image by disentangling the image into domain-invariant structure and domain-specific style components; b) a Depth-specific Attention (DSA) module, which learns task-specific knowledge to suppress depth-irrelevant structures for better depth estimation and generalization; and c) a depth prediction (DP) module to predict depth from the depth-specific representation.
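The three-module pipeline above can be sketched as a simple function composition. This is a minimal illustration of the described data flow only, assuming the attention map is computed from the structure map and applied by elementwise gating; the module internals (`ste`, `dsa`, `dp`) are placeholders, not the paper's networks.

```python
import numpy as np

def s2r_depthnet(image, ste, dsa, dp):
    """Sketch of the S2R-DepthNet forward pass as described in the blurb:
    extract a domain-invariant structure map, gate it with a depth-specific
    attention map, then regress depth from the attended representation.
    The three callables stand in for the STE, DSA, and DP networks."""
    structure = ste(image)       # domain-invariant structural representation
    attention = dsa(structure)   # suppresses depth-irrelevant structures
    return dp(structure * attention)

# Toy stand-ins: identity structure extractor, uniform 0.5 attention,
# identity depth head (hypothetical, for illustration only).
image = np.ones((4, 4))
depth = s2r_depthnet(
    image,
    ste=lambda x: x,
    dsa=lambda s: 0.5 * np.ones_like(s),
    dp=lambda r: r,
)
```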
U-HRNet: Delving into Improving Semantic Representation of High Resolution Network for Dense Prediction
Therefore, we designed a U-shaped High-Resolution Network (U-HRNet), which adds more stages after the feature map with the strongest semantic representation and relaxes the constraint in HRNet that all resolutions must be computed in parallel for a newly added stage.
Multi-View Silhouette and Depth Decomposition for High Resolution 3D Object Representation
We consider the problem of scaling deep generative shape models to high-resolution.
MegaDepth: Learning Single-View Depth Prediction from Internet Photos
We validate the use of large amounts of Internet data by showing that models trained on MegaDepth exhibit strong generalization: not only to novel scenes, but also to other diverse datasets, including Make3D, KITTI, and DIW, even when no images from those datasets are seen during training.
Enforcing geometric constraints of virtual normal for depth prediction
Monocular depth prediction plays a crucial role in understanding 3D scene geometry.
Virtual Normal: Enforcing Geometric Constraints for Accurate and Robust Depth Prediction
In this work, we show the importance of the high-order 3D geometric constraints for depth prediction.
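The virtual-normal idea can be sketched numerically: back-project predicted and ground-truth depth maps into 3D point clouds, sample random point triplets, and penalize the difference between the unit normals of the "virtual planes" each triplet spans. This is a hedged, simplified sketch of that geometric constraint (the paper's full loss includes triplet filtering and other details omitted here); the function names and the pinhole-intrinsics tuple are assumptions for illustration.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map (H, W) into a 3D point cloud (H, W, 3)
    using a pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)

def virtual_normals(points, triplets):
    """Unit normals of the virtual planes spanned by sampled point triplets."""
    p0, p1, p2 = (points.reshape(-1, 3)[idx] for idx in triplets)
    n = np.cross(p1 - p0, p2 - p0)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8)

def vn_loss(pred_depth, gt_depth, intrinsics, num_samples=1000, rng=None):
    """Simplified virtual-normal loss: mean L1 distance between normals
    computed from the predicted and ground-truth point clouds over
    randomly sampled point triplets."""
    rng = rng or np.random.default_rng(0)
    h, w = gt_depth.shape
    triplets = rng.integers(0, h * w, size=(3, num_samples))
    n_pred = virtual_normals(backproject(pred_depth, *intrinsics), triplets)
    n_gt = virtual_normals(backproject(gt_depth, *intrinsics), triplets)
    return np.abs(n_pred - n_gt).sum(axis=-1).mean()
```

Because the normals are computed from long-range point triplets rather than local pixel neighborhoods, the constraint captures high-order 3D geometry that a per-pixel depth loss misses.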
Self-Supervised Learning from Images with a Joint-Embedding Predictive Architecture
This paper demonstrates an approach for learning highly semantic image representations without relying on hand-crafted data-augmentations.
StereoNet: Guided Hierarchical Refinement for Real-Time Edge-Aware Depth Prediction
A first estimate of the disparity is computed in a very low-resolution cost volume; the model then hierarchically re-introduces high-frequency details through a learned upsampling function that uses compact pixel-to-pixel refinement networks.
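The coarse-to-fine loop described above can be sketched as follows. This is a minimal structural illustration, not StereoNet's implementation: the per-level refinement functions are placeholders for the paper's compact refinement CNNs, and nearest-neighbor upsampling stands in for the learned upsampling function.

```python
import numpy as np

def upsample2x(disp):
    """Nearest-neighbor 2x upsampling of a disparity map. Disparity values
    are doubled because pixel coordinates double at the finer resolution."""
    return np.repeat(np.repeat(disp, 2, axis=0), 2, axis=1) * 2.0

def coarse_to_fine(coarse_disp, refine_fns):
    """Hierarchical refinement: start from a low-resolution disparity
    estimate and repeatedly upsample, letting each level's refinement
    function add back a high-frequency residual (a small learned CNN in
    the real model; placeholder callables here)."""
    disp = coarse_disp
    for refine in refine_fns:
        disp = upsample2x(disp)
        disp = disp + refine(disp)  # residual correction per level
    return disp

# Toy usage: two levels of zero-residual refinement.
coarse = np.ones((4, 4))
full = coarse_to_fine(coarse, [lambda d: np.zeros_like(d)] * 2)
```

Keeping the cost volume at very low resolution is what makes the method real-time; all fine detail is recovered by the cheap residual refinement stages.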
Geo-Supervised Visual Depth Prediction
We propose using global orientation from inertial measurements, and the bias it induces on the shape of objects populating the scene, to inform visual 3D reconstruction.