Depth Prediction
154 papers with code • 1 benchmark • 2 datasets
Most implemented papers
Deeper Depth Prediction with Fully Convolutional Residual Networks
This paper addresses the problem of estimating the depth map of a scene given a single RGB image.
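The single-image setting is easy to sketch. Below is a minimal PyTorch encoder-decoder mapping an RGB tensor to a positive dense depth map; it is not the paper's residual architecture, and all layer sizes and names are placeholders of mine:

```python
# Minimal sketch (not the paper's FCRN): a fully convolutional
# encoder-decoder that maps an RGB image to a dense depth map.
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
            nn.Softplus(),  # depth must be positive
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

depth = TinyDepthNet()(torch.randn(1, 3, 128, 128))  # -> (1, 1, 128, 128)
```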
Unsupervised Monocular Depth Estimation with Left-Right Consistency
Learning-based methods have shown very promising results for the task of depth estimation in single images.
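The left-right consistency idea can be sketched as a loss that warps the right-view disparity into the left view and penalizes disagreement with the left-view disparity. The code below is a simplified illustration, not the paper's implementation; the function names, sign convention, and normalized-disparity units are assumptions of mine:

```python
# Hedged sketch of a left-right disparity consistency loss.
import torch
import torch.nn.functional as F

def warp_horizontal(img, disp):
    """Sample `img` at x - disp (disp in normalized [-1, 1] units)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).expand(b, h, w, 2).clone()
    grid[..., 0] = grid[..., 0] - disp.squeeze(1)  # shift x by disparity
    return F.grid_sample(img, grid, align_corners=True)

def lr_consistency_loss(disp_left, disp_right):
    # Project the right-view disparity into the left view and compare.
    disp_right_to_left = warp_horizontal(disp_right, disp_left)
    return (disp_left - disp_right_to_left).abs().mean()
```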
From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation
We show that the proposed method outperforms state-of-the-art methods by a significant margin when evaluated on challenging benchmarks.
Sparse-to-Dense: Depth Prediction from Sparse Depth Samples and a Single Image
We consider the problem of dense depth prediction from a sparse set of depth measurements and a single RGB image.
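The input construction here is easy to illustrate: concatenate the RGB image with a mostly-zero sparse depth channel and feed the 4-channel result to a depth network. A hedged sketch, with the sampling helper and all names my own:

```python
# Illustrative sketch: the sparse-to-dense idea feeds the network an
# RGB image concatenated with a (mostly zero) sparse depth channel.
import torch

def make_sparse_input(rgb, dense_depth, num_samples=200):
    """Simulate sparse LiDAR-style samples from a ground-truth map."""
    b, _, h, w = dense_depth.shape
    mask = torch.zeros(b, 1, h, w)
    idx = torch.randint(0, h * w, (b, num_samples))
    mask.view(b, -1)[torch.arange(b).unsqueeze(1), idx] = 1.0
    sparse_depth = dense_depth * mask
    return torch.cat([rgb, sparse_depth], dim=1)  # (b, 4, h, w)
```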
Unsupervised Monocular Depth Learning in Dynamic Scenes
We present a method for jointly training the estimation of depth, ego-motion, and a dense 3D translation field of objects relative to the scene, with monocular photometric consistency being the sole source of supervision.
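The photometric consistency signal can be sketched as a reprojection loss: back-project target pixels with the predicted depth, move them by the predicted ego-motion, re-project, and compare the sampled source colors to the target. The version below is a simplified single-rigid-motion sketch (it omits the paper's per-object translation field), with all names and shapes my assumptions:

```python
# Minimal sketch of monocular photometric consistency.
import torch
import torch.nn.functional as F

def photometric_loss(target, source, depth, R, t, K):
    """target/source: (1,3,H,W); depth: (1,1,H,W); R: (3,3); t: (3,); K: (3,3)."""
    _, _, h, w = target.shape
    ys, xs = torch.meshgrid(torch.arange(h, dtype=torch.float32),
                            torch.arange(w, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).reshape(3, -1)
    cam = torch.linalg.inv(K) @ pix * depth.reshape(1, -1)  # back-project
    cam2 = R @ cam + t.reshape(3, 1)                         # apply ego-motion
    proj = K @ cam2                                          # re-project
    uv = proj[:2] / proj[2].clamp(min=1e-6)  # assumes points stay in front
    # normalize pixel coordinates to [-1, 1] for grid_sample
    u = uv[0] / (w - 1) * 2 - 1
    v = uv[1] / (h - 1) * 2 - 1
    grid = torch.stack([u, v], dim=-1).reshape(1, h, w, 2)
    warped = F.grid_sample(source, grid, align_corners=True)
    return (target - warped).abs().mean()
```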
EPP-MVSNet: Epipolar-Assembling Based Depth Prediction for Multi-View Stereo
We achieve promising results on all datasets and the highest F-score on the online Tanks and Temples (TNT) intermediate benchmark.
Predicting Depth, Surface Normals and Semantic Labels with a Common Multi-Scale Convolutional Architecture
In this paper we address three different computer vision tasks using a single basic architecture: depth prediction, surface normal estimation, and semantic labeling.
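The shared-architecture idea reduces to one backbone feeding three task-specific heads. The layer sizes below are placeholders, not the paper's actual multi-scale network:

```python
# Sketch of a shared backbone with depth, normals, and semantics heads.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, num_classes=40):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        self.depth_head = nn.Conv2d(64, 1, 1)
        self.normals_head = nn.Conv2d(64, 3, 1)
        self.semantics_head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        feats = self.backbone(x)
        return (self.depth_head(feats),
                self.normals_head(feats),
                self.semantics_head(feats))
```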
Fast Neural Architecture Search of Compact Semantic Segmentation Models via Auxiliary Cells
While most results in this domain have been achieved on image classification and language modelling problems, here we concentrate on dense per-pixel tasks, in particular, semantic image segmentation using fully convolutional networks.
Depth from Videos in the Wild: Unsupervised Monocular Depth Learning from Unknown Cameras
We present a novel method for simultaneous learning of depth, ego-motion, object motion, and camera intrinsics from monocular videos, using only consistency across neighboring video frames as the supervision signal.
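One way to read "unknown cameras" is that the intrinsics themselves become learnable parameters optimized alongside depth and ego-motion. The parameterization below is my assumption, not the paper's exact formulation:

```python
# Sketch: camera intrinsics as learnable parameters.
import torch
import torch.nn as nn

class LearnableIntrinsics(nn.Module):
    def __init__(self, height, width):
        super().__init__()
        # initialize focal lengths / principal point to a plausible guess
        self.fx = nn.Parameter(torch.tensor(float(width)))
        self.fy = nn.Parameter(torch.tensor(float(height)))
        self.cx = nn.Parameter(torch.tensor(width / 2.0))
        self.cy = nn.Parameter(torch.tensor(height / 2.0))

    def forward(self):
        return torch.stack([
            torch.stack([self.fx, torch.zeros(()), self.cx]),
            torch.stack([torch.zeros(()), self.fy, self.cy]),
            torch.tensor([0.0, 0.0, 1.0]),
        ])  # (3, 3), differentiable w.r.t. fx, fy, cx, cy
```

Because the matrix is assembled from parameters, gradients from any reprojection loss (such as the photometric loss sketched above) flow back into fx, fy, cx, and cy.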
3D Ken Burns Effect from a Single Image
From a single-image depth estimate, the framework maps the input image to a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding camera positions.
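The image-plus-depth-to-point-cloud step is standard unprojection: scale each pixel's camera ray by its depth. A minimal numpy sketch (the actual framework additionally inpaints disocclusions and renders the cloud from novel viewpoints):

```python
# Minimal sketch: lift each pixel to a 3D point using depth + intrinsics.
import numpy as np

def image_to_point_cloud(rgb, depth, K):
    """rgb: (H,W,3) uint8; depth: (H,W) meters; K: (3,3) intrinsics."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1)
    rays = np.linalg.inv(K) @ pix           # rays through each pixel
    points = rays * depth.reshape(1, -1)    # scale rays by depth
    colors = rgb.reshape(-1, 3)
    return points.T, colors                 # (H*W, 3) each
```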