Monocular Depth Estimation using Diffusion Models

28 Feb 2023 · Saurabh Saxena, Abhishek Kar, Mohammad Norouzi, David J. Fleet

We formulate monocular depth estimation using denoising diffusion models, inspired by their recent successes in high fidelity image generation. To that end, we introduce innovations to address problems arising from noisy, incomplete depth maps in training data, including step-unrolled denoising diffusion, an $L_1$ loss, and depth infilling during training. To cope with the limited availability of data for supervised training, we leverage pre-training on self-supervised image-to-image translation tasks. Despite the simplicity of the approach, with a generic loss and architecture, our DepthGen model achieves SOTA performance on the indoor NYU dataset, and near-SOTA results on the outdoor KITTI dataset. Further, with a multimodal posterior, DepthGen naturally represents depth ambiguity (e.g., from transparent surfaces), and its zero-shot performance, combined with depth imputation, enables a simple but effective text-to-3D pipeline. Project page: https://depth-gen.github.io
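The abstract names three training-time ingredients: an $L_1$ loss, depth infilling, and step-unrolled denoising. As a rough illustration of how they might fit together, here is a minimal PyTorch sketch of one training step. The conditional denoiser `model(x_t, image, t)`, the mean-value infilling, and the linear noise schedule are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

# Hypothetical DDPM noise schedule (linear betas); the paper's exact
# schedule may differ.
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)


def train_step(model, image, depth, valid):
    """One training step on (image, depth) pairs with a validity mask.

    image: (B, 3, H, W) float; depth: (B, 1, H, W) float;
    valid: (B, 1, H, W) bool, False where ground-truth depth is missing.
    `model(x_t, image, t)` is a hypothetical conditional denoiser that
    predicts the noise added to the depth map.
    """
    b = depth.shape[0]
    device = depth.device

    # Depth infilling: replace missing values so the noised input has
    # sensible statistics. Mean infill is a stand-in for the paper's
    # infilling scheme.
    mean_depth = (depth * valid).sum() / valid.sum().clamp(min=1)
    x0 = torch.where(valid, depth, mean_depth)

    # Forward diffusion: x_t = sqrt(ab) * x0 + sqrt(1 - ab) * eps.
    t = torch.randint(0, T, (b,), device=device)
    ab = alphas_bar.to(device)[t].view(b, 1, 1, 1)
    x_t = ab.sqrt() * x0 + (1.0 - ab).sqrt() * torch.randn_like(x0)

    # Step-unrolled denoising: rebuild the noisy input from the model's
    # own (detached) x0 estimate, shrinking the gap between training
    # inputs and the imperfect samples seen at inference time.
    with torch.no_grad():
        eps_hat = model(x_t, image, t)
        x0_hat = (x_t - (1.0 - ab).sqrt() * eps_hat) / ab.sqrt()
    eps = torch.randn_like(x0)
    x_t = ab.sqrt() * x0_hat + (1.0 - ab).sqrt() * eps

    # L1 loss on the noise prediction, restricted to valid pixels so the
    # infilled regions never supervise the model.
    err = F.l1_loss(model(x_t, image, t), eps, reduction="none")
    return (err * valid).sum() / valid.sum().clamp(min=1)
```

Masking the loss rather than the input is the key design choice here: infilling keeps the diffusion statistics well-behaved, while the mask ensures only real measurements drive the gradient.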


Results from the Paper


Ranked #22 on Monocular Depth Estimation on NYU-Depth V2 (using extra training data)

Task: Monocular Depth Estimation · Dataset: NYU-Depth V2 · Model: DepthGen · Uses extra training data: Yes

| Metric | Value | Global Rank |
| --- | --- | --- |
| RMSE | 0.314 | #22 |
| absolute relative error | 0.074 | #14 |
| δ < 1.25 | 0.946 | #18 |
| δ < 1.25² | 0.987 | #32 |
| δ < 1.25³ | 0.996 | #37 |
| log10 | 0.032 | #14 |
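For readers unfamiliar with the threshold metrics above, the δ values follow the standard convention in the depth-estimation literature: the fraction of pixels whose predicted depth $d_p$ and ground-truth depth $d^*_p$ agree within a ratio of $1.25^n$.

```latex
% Standard threshold accuracy used in the NYU/KITTI benchmarks:
% fraction of pixels whose depth ratio falls below 1.25^n, for n = 1, 2, 3.
\delta_n = \frac{1}{|P|}\,\Bigl|\Bigl\{\, p \in P \;:\;
  \max\!\Bigl(\tfrac{d_p}{d^*_p},\, \tfrac{d^*_p}{d_p}\Bigr) < 1.25^{\,n} \Bigr\}\Bigr|
```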
