AdaBins: Depth Estimation using Adaptive Bins

We address the problem of estimating a high-quality dense depth map from a single RGB input image. We start from a baseline encoder-decoder convolutional neural network architecture and ask how global processing of information can improve overall depth estimation. To this end, we propose a transformer-based architecture block that divides the depth range into bins whose center values are estimated adaptively per image. The final depth values are estimated as linear combinations of the bin centers. We call our new building block AdaBins. Our results show a decisive improvement over the state-of-the-art on several popular depth datasets across all metrics. We also validate the effectiveness of the proposed block with an ablation study and provide the code and corresponding pre-trained weights of the new state-of-the-art model.
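As a concrete illustration of the computation described above, the following PyTorch sketch turns per-image bin scores and per-pixel bin probabilities into a depth map. It is a minimal sketch, not the authors' released implementation: the function name, tensor shapes, and the exact bin-width normalization are illustrative assumptions, and the depth range (here 1e-3 to 10 m, as for indoor NYU-Depth V2 scenes; KITTI uses depths up to about 80 m) is a per-dataset choice.

```python
import torch

def depth_from_adaptive_bins(bin_logits, pixel_logits, d_min=1e-3, d_max=10.0):
    """Combine adaptive bin centers into per-pixel depths (illustrative sketch).

    bin_logits:   (B, N)       per-image scores for N bin widths
    pixel_logits: (B, N, H, W) per-pixel scores over the same N bins
    returns:      (B, 1, H, W) predicted depth map
    """
    # Normalize predicted widths so they partition [d_min, d_max];
    # the small epsilon keeps every bin at a nonzero width.
    widths = torch.relu(bin_logits) + 1e-3
    widths = widths / widths.sum(dim=1, keepdim=True)
    widths = (d_max - d_min) * widths                        # (B, N)

    # Cumulative widths give the bin edges; centers are edge midpoints,
    # so their placement adapts to each input image.
    edges = d_min + torch.cumsum(widths, dim=1)              # right edges
    edges = torch.cat([torch.full_like(edges[:, :1], d_min), edges], dim=1)
    centers = 0.5 * (edges[:, :-1] + edges[:, 1:])           # (B, N)

    # Per-pixel probabilities over bins; the depth is the expected bin
    # center, i.e. a linear combination of the adaptive centers.
    probs = torch.softmax(pixel_logits, dim=1)               # (B, N, H, W)
    return (probs * centers[:, :, None, None]).sum(dim=1, keepdim=True)

# Example call with random inputs (batch of 2, 256 bins, 240x320 output):
# depth = depth_from_adaptive_bins(torch.randn(2, 256), torch.randn(2, 256, 240, 320))
```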

CVPR 2021

Results from the Paper


Task                        Dataset            Model    Metric                   Value  Global Rank
--------------------------  -----------------  -------  -----------------------  -----  -----------
Monocular Depth Estimation  KITTI Eigen split  AdaBins  absolute relative error  0.058  # 28
                                                        RMSE                     2.360  # 28
                                                        RMSE log                 0.088  # 27
                                                        Delta < 1.25             0.964  # 28
                                                        Delta < 1.25^2           0.995  # 27
                                                        Delta < 1.25^3           0.999  # 10
Monocular Depth Estimation  NYU-Depth V2       AdaBins  RMSE                     0.364  # 41
                                                        absolute relative error  0.103  # 40
                                                        Delta < 1.25             0.903  # 41
                                                        Delta < 1.25^2           0.984  # 39
                                                        Delta < 1.25^3           0.997  # 26
                                                        log 10                   0.044  # 40
Depth Estimation            NYU-Depth V2       AdaBins  RMSE                     0.364  # 7
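For reference, the metrics above are the standard monocular depth evaluation measures ("RMS" and "RMSE" denote the same root-mean-square error). The NumPy sketch below shows how they are commonly computed; `pred` and `gt` are assumed to be arrays of predicted and ground-truth depths at valid (positive) pixels, and benchmarks differ slightly in their masking and depth-capping conventions.

```python
import numpy as np

def depth_metrics(pred, gt):
    """Standard monocular depth metrics over valid (positive) depth values."""
    ratio = np.maximum(pred / gt, gt / pred)
    return {
        "abs_rel":  np.mean(np.abs(pred - gt) / gt),      # absolute relative error
        "rmse":     np.sqrt(np.mean((pred - gt) ** 2)),
        "rmse_log": np.sqrt(np.mean((np.log(pred) - np.log(gt)) ** 2)),
        "log10":    np.mean(np.abs(np.log10(pred) - np.log10(gt))),
        "delta1":   np.mean(ratio < 1.25),                # Delta < 1.25
        "delta2":   np.mean(ratio < 1.25 ** 2),
        "delta3":   np.mean(ratio < 1.25 ** 3),
    }
```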
