Monocular Depth Estimation through Virtual-world Supervision and Real-world SfM Self-Supervision

22 Mar 2021  ·  Akhil Gurram, Ahmet Faruk Tuna, Fengyi Shen, Onay Urfalioglu, Antonio M. López ·

Depth information is essential for on-board perception in autonomous driving and driver assistance. Monocular depth estimation (MDE) is very appealing since it yields appearance and depth in direct pixelwise correspondence, without further calibration. The best MDE models are based on Convolutional Neural Networks (CNNs) trained in a supervised manner, i.e., assuming pixelwise ground truth (GT). Usually, this GT is acquired at training time through a calibrated multi-modal suite of sensors. However, relying on a monocular system at training time as well is cheaper and more scalable. This is possible by applying structure-from-motion (SfM) principles to generate self-supervision. Nevertheless, problems such as camouflaged objects, visibility changes, static-camera intervals, textureless areas, and scale ambiguity diminish the usefulness of such self-supervision. In this paper, we perform monocular depth estimation by virtual-world supervision (MonoDEVS) and real-world SfM self-supervision. We compensate for the limitations of SfM self-supervision by leveraging virtual-world images with accurate semantic and depth supervision, and by addressing the virtual-to-real domain gap. Our MonoDEVSNet outperforms previous MDE CNNs trained on monocular and even stereo sequences.
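The scale ambiguity mentioned above means a monocular SfM-trained network can only predict depth up to an unknown global scale. A common workaround at evaluation time (used throughout the self-supervised MDE literature, e.g. on the KITTI Eigen split) is per-image median scaling of the prediction against the ground truth. A minimal sketch, assuming NumPy arrays of valid (positive) depth values:

```python
import numpy as np

def median_scale(pred, gt):
    """Align a scale-ambiguous depth prediction to ground truth.

    Multiplies the prediction by the ratio of medians so that
    median(scaled_pred) == median(gt). Inputs are assumed to be
    flattened arrays of strictly positive depths at valid pixels.
    """
    scale = np.median(gt) / np.median(pred)
    return pred * scale
```

Note that median scaling only corrects the global scale; relative (per-pixel) errors in the prediction are unaffected.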

Benchmark results

Task: Monocular Depth Estimation — Model: MonoDEVSNet

Benchmark: KITTI Eigen split
  Metric                    Value   Global Rank
  absolute relative error   0.053   # 4
  RMSE                      2.101   # 2
  Sq Rel                    0.161   # 3
  RMSE log                  0.082   # 4
  Delta < 1.25              0.969   # 4
  Delta < 1.25^2            0.996   # 4
  Delta < 1.25^3            0.999   # 1

Benchmark: KITTI Eigen split unsupervised
  Metric                    Value   Global Rank
  absolute relative error   0.101   # 8
  RMSE                      4.413   # 6
  Sq Rel                    0.703   # 8
  Delta < 1.25              0.882   # 7
  Delta < 1.25^2            0.962   # 7
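The metrics in the table above are the standard KITTI Eigen-split depth evaluation measures. As a reference for how they are defined, here is a hedged sketch of their computation in NumPy (the exact evaluation protocol, e.g. depth capping and valid-pixel masking, follows the benchmark's own scripts and is not reproduced here):

```python
import numpy as np

def depth_metrics(gt, pred):
    """Standard monocular depth metrics over valid (positive) depths.

    abs_rel  : mean |gt - pred| / gt        (absolute relative error)
    sq_rel   : mean (gt - pred)^2 / gt      (squared relative error)
    rmse     : root mean squared error
    rmse_log : RMSE in log-depth space
    d1/d2/d3 : fraction of pixels with max(gt/pred, pred/gt) < 1.25^k
    """
    thresh = np.maximum(gt / pred, pred / gt)
    d1 = float((thresh < 1.25).mean())
    d2 = float((thresh < 1.25 ** 2).mean())
    d3 = float((thresh < 1.25 ** 3).mean())
    abs_rel = float(np.mean(np.abs(gt - pred) / gt))
    sq_rel = float(np.mean((gt - pred) ** 2 / gt))
    rmse = float(np.sqrt(np.mean((gt - pred) ** 2)))
    rmse_log = float(np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2)))
    return {"abs_rel": abs_rel, "sq_rel": sq_rel, "rmse": rmse,
            "rmse_log": rmse_log, "d1": d1, "d2": d2, "d3": d3}
```

A perfect prediction gives zero for all error metrics and 1.0 for each Delta accuracy; lower is better for the first four rows of the table, higher is better for the Delta rows.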
