Monocular Depth Estimation

398 papers with code • 25 benchmarks • 32 datasets

Monocular Depth Estimation is the task of estimating the depth value (distance relative to the camera) of each pixel given a single (monocular) RGB image. This challenging task is a key prerequisite for scene understanding in applications such as 3D scene reconstruction, autonomous driving, and AR. State-of-the-art methods usually fall into one of two categories: designing a network powerful enough to regress the depth map directly, or discretizing the depth range into bins or processing the input in windows to reduce computational complexity. The most popular benchmarks are the KITTI and NYUv2 datasets. Models are typically evaluated using RMSE or absolute relative error.
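The two evaluation metrics mentioned above are simple to compute. A minimal sketch (the function name and interface are illustrative, not from any particular benchmark toolkit):

```python
import numpy as np

def depth_metrics(pred, gt, eps=1e-6):
    """Hypothetical helper: RMSE and absolute relative error for depth maps.

    pred, gt: arrays of predicted / ground-truth depth (e.g. in metres);
    only pixels with valid (positive) ground truth are scored.
    """
    mask = gt > eps
    pred, gt = pred[mask], gt[mask]
    rmse = np.sqrt(np.mean((pred - gt) ** 2))      # root mean squared error
    abs_rel = np.mean(np.abs(pred - gt) / gt)      # absolute relative error
    return rmse, abs_rel

rmse, abs_rel = depth_metrics(np.array([2.0, 4.0]), np.array([1.0, 4.0]))
```

Benchmarks such as KITTI typically also report threshold accuracies (the fraction of pixels with max(pred/gt, gt/pred) below 1.25), which follow the same masked-pixel pattern.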

Source: Defocus Deblurring Using Dual-Pixel Data

Most implemented papers

High Quality Monocular Depth Estimation via Transfer Learning

ialhashim/DenseDepth 31 Dec 2018

Accurate depth estimation from images is a fundamental task in many applications including scene understanding and reconstruction.

DINOv2: Learning Robust Visual Features without Supervision

facebookresearch/dinov2 14 Apr 2023

The recent breakthroughs in natural language processing for model pretraining on large quantities of data have opened the way for similar foundation models in computer vision.

Deeper Depth Prediction with Fully Convolutional Residual Networks

iro-cp/FCRN-DepthPrediction 1 Jun 2016

This paper addresses the problem of estimating the depth map of a scene given a single RGB image.

Unsupervised Monocular Depth Estimation with Left-Right Consistency

mrharicot/monodepth CVPR 2017

Learning based methods have shown very promising results for the task of depth estimation in single images.

Digging Into Self-Supervised Monocular Depth Estimation

nianticlabs/monodepth2 4 Jun 2018

Per-pixel ground-truth depth data is challenging to acquire at scale.

Vision Transformers for Dense Prediction

isl-org/DPT ICCV 2021

We introduce dense vision transformers, an architecture that leverages vision transformers in place of convolutional networks as a backbone for dense prediction tasks.

Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-shot Cross-dataset Transfer

intel-isl/MiDaS 2 Jul 2019

In particular, we propose a robust training objective that is invariant to changes in depth range and scale, advocate the use of principled multi-objective learning to combine data from different sources, and highlight the importance of pretraining encoders on auxiliary tasks.

From Big to Small: Multi-Scale Local Planar Guidance for Monocular Depth Estimation

cogaplex-bts/bts 24 Jul 2019

We show that the proposed method outperforms state-of-the-art works by a significant margin when evaluated on challenging benchmarks.

AdaBins: Depth Estimation using Adaptive Bins

shariqfarooq123/AdaBins CVPR 2021

We address the problem of estimating a high quality dense depth map from a single RGB input image.
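The adaptive-binning idea behind this line of work can be sketched in a few lines: the depth range is divided into per-image bins, and each pixel's depth is a softmax-weighted combination of the bin centres. The function below is an illustrative sketch of that final step, not the AdaBins implementation (all names and the interface are assumptions):

```python
import numpy as np

def adaptive_bin_depth(bin_widths, bin_logits, d_min=1e-3, d_max=10.0):
    """Sketch of depth regression via adaptive bins.

    bin_widths: positive per-image widths for N bins (predicted per image);
    bin_logits: per-pixel scores over the same N bins, shape (H, W, N).
    """
    w = bin_widths / bin_widths.sum()                  # normalise widths
    edges = d_min + (d_max - d_min) * np.cumsum(np.concatenate([[0.0], w]))
    centers = 0.5 * (edges[:-1] + edges[1:])           # one centre per bin
    e = np.exp(bin_logits - bin_logits.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)          # per-pixel softmax
    return (probs * centers).sum(axis=-1)              # (H, W) depth map

# two equal-width bins over [0, 2] -> centres 0.5 and 1.5; the logits
# put nearly all mass on the first bin, so the pixel's depth is ~0.5
depth = adaptive_bin_depth(np.array([1.0, 1.0]),
                           np.array([[[100.0, 0.0]]]),
                           d_min=0.0, d_max=2.0)
```

Because the bin widths are predicted per image, the discretization adapts to each scene's depth distribution instead of using a fixed quantization.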