MonoViT: Self-Supervised Monocular Depth Estimation with a Vision Transformer

Self-supervised monocular depth estimation is an attractive solution that does not require hard-to-source depth labels for training. Convolutional neural networks (CNNs) have recently achieved great success in this task. However, their limited receptive field constrains existing architectures to reason only locally, dampening the effectiveness of the self-supervised paradigm. In light of the recent successes achieved by Vision Transformers (ViTs), we propose MonoViT, a novel framework combining the global reasoning enabled by ViT models with the flexibility of self-supervised monocular depth estimation. By combining plain convolutions with Transformer blocks, our model can reason both locally and globally, yielding depth predictions with greater detail and accuracy and achieving state-of-the-art performance on the established KITTI benchmark. Moreover, MonoViT demonstrates superior generalization on other datasets such as Make3D and DrivingStereo.
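The abstract's core idea is pairing convolutions (local receptive field) with Transformer-style self-attention (global receptive field). The exact block design is in the paper; the sketch below is only a rough illustration of that local-plus-global pattern, with hypothetical layer sizes and a generic residual hybrid block, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, C = 4, 4, 8  # tiny feature map (hypothetical sizes)

def depthwise_conv3x3(x, kernel):
    """Local reasoning: each channel filtered by its own 3x3 kernel (zero padding)."""
    h, w, c = x.shape
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for dy in range(3):
        for dx in range(3):
            # kernel[dy, dx] has shape (c,) and broadcasts over the (h, w, c) window
            out += pad[dy:dy + h, dx:dx + w] * kernel[dy, dx]
    return out

def self_attention(x, Wq, Wk, Wv):
    """Global reasoning: every spatial position attends to all others."""
    h, w, c = x.shape
    tokens = x.reshape(h * w, c)                 # flatten positions into tokens
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(c)                # scaled dot-product attention
    scores -= scores.max(axis=1, keepdims=True)  # softmax numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # rows sum to 1
    return (attn @ v).reshape(h, w, c)

def hybrid_block(x, kernel, Wq, Wk, Wv):
    """Convolution for local structure, then attention for global context,
    joined by a residual connection (illustrative only)."""
    local = depthwise_conv3x3(x, kernel)
    return x + self_attention(local, Wq, Wk, Wv)

x = rng.standard_normal((H, W, C))
kernel = rng.standard_normal((3, 3, C))
Wq, Wk, Wv = (rng.standard_normal((C, C)) for _ in range(3))
y = hybrid_block(x, kernel, Wq, Wk, Wv)  # same shape as the input feature map
```

The convolution mixes each position only with its 3x3 neighbourhood, while the attention step lets every position read from the entire feature map, which is the complementary behaviour the abstract describes.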


Results from the Paper


Monocular Depth Estimation on KITTI (model: MonoViT)
  absolute relative error:  0.093   rank #1

Unsupervised Monocular Depth Estimation on KITTI-C (model: MonoViT)
  absolute relative error (AbsRel):  0.161   rank #3
  squared relative error (SqRel):    1.292   rank #3
  RMSE:                              6.029   rank #3
  RMSE log:                          0.247   rank #3
  a1 (delta < 1.25):                 0.768   rank #3
  a2 (delta < 1.25^2):               0.915   rank #4
  a3 (delta < 1.25^3):               0.964   rank #4

Monocular Depth Estimation on KITTI Eigen split, unsupervised (model: MonoViT (MS + 1024x320))
  absolute relative error:  0.093   rank #5
  RMSE:                     4.202   rank #6
  squared relative error:   0.671   rank #7
  RMSE log:                 0.169   rank #3
  delta < 1.25:             0.912   rank #3
  delta < 1.25^2:           0.969   rank #3
  delta < 1.25^3:           0.985   rank #2
  Resolution:               1024x320   rank #1
  Mono:                     X   rank #1
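The error and accuracy measures reported above are the standard depth-evaluation metrics used on KITTI. A minimal NumPy implementation of their usual definitions (this is the conventional formulation, not code from the paper):

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard monocular-depth metrics over valid pixels.

    gt, pred: 1-D arrays of positive ground-truth and predicted depths (metres).
    """
    thresh = np.maximum(gt / pred, pred / gt)
    a1 = (thresh < 1.25).mean()         # delta < 1.25
    a2 = (thresh < 1.25 ** 2).mean()    # delta < 1.25^2
    a3 = (thresh < 1.25 ** 3).mean()    # delta < 1.25^3

    abs_rel = np.mean(np.abs(gt - pred) / gt)                       # AbsRel
    sq_rel = np.mean((gt - pred) ** 2 / gt)                         # SqRel
    rmse = np.sqrt(np.mean((gt - pred) ** 2))                       # RMSE
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))   # RMSE log

    return dict(abs_rel=abs_rel, sq_rel=sq_rel, rmse=rmse,
                rmse_log=rmse_log, a1=a1, a2=a2, a3=a3)

# Toy example: a prediction 10% off on one of two pixels.
m = depth_metrics(np.array([10.0, 20.0]), np.array([11.0, 20.0]))
```

Lower is better for the four error metrics; higher is better for a1, a2, a3 (the fraction of pixels whose ratio to ground truth falls under each threshold).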

Methods