Unsupervised Scale-Consistent Depth Learning from Video

We propose a monocular depth estimator, SC-Depth, which requires only unlabelled videos for training and enables scale-consistent prediction at inference time. Our contributions are threefold: (i) we propose a geometry consistency loss that penalizes inconsistency between the depths predicted for adjacent views; (ii) we propose a self-discovered mask that automatically localizes moving objects, which violate the underlying static-scene assumption and produce noisy training signals; (iii) we demonstrate the efficacy of each component with a detailed ablation study and show high-quality depth estimation results on both the KITTI and NYUv2 datasets. Moreover, thanks to the scale-consistent prediction, our monocular-trained networks can be readily integrated into the ORB-SLAM2 system for more robust and accurate tracking. The proposed hybrid Pseudo-RGBD SLAM shows compelling results on KITTI and generalizes well to the KAIST dataset without additional training. Finally, we provide several demos for qualitative evaluation.
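The geometry consistency loss and the self-discovered mask can be computed from the same depth-inconsistency map: the normalized difference between a depth map computed by projecting one frame's points into the other view and the other frame's predicted depth warped into the same view. Below is a minimal numpy sketch of that idea; the function name, the assumption that the warped and projected depth maps are already available (the paper obtains them via pose-based view synthesis), and the `eps` stabilizer are ours, not the authors' exact implementation.

```python
import numpy as np

def geometry_consistency(d_computed, d_warped, eps=1e-7):
    """Depth inconsistency between two adjacent views of the same scene.

    d_computed : HxW depth of frame a's points transformed into frame b
                 (computed from frame a's predicted depth and relative pose).
    d_warped   : HxW depth predicted for frame b, warped into alignment.
    Returns the geometry consistency loss (scalar) and the
    self-discovered weight mask (HxW, values in [0, 1]); consistent
    static regions get weights near 1, moving objects near 0.
    """
    # Normalized per-pixel inconsistency; symmetric in the two depths
    # and scale-invariant, so it is bounded in [0, 1).
    diff = np.abs(d_computed - d_warped) / (d_computed + d_warped + eps)
    geo_loss = diff.mean()       # penalizes scale/structure drift
    weight_mask = 1.0 - diff     # down-weights inconsistent (moving) pixels
    return geo_loss, weight_mask
```

In training, the weight mask would multiply the per-pixel photometric loss, so dynamic objects and occlusions contribute less to the gradient.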



| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-50) | absolute relative error | 0.114 | #36 |
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-50) | RMSE | 4.706 | #20 |
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-50) | RMSE log | 0.191 | #17 |
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-50) | Delta < 1.25 | 0.873 | #18 |
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-50) | Delta < 1.25^2 | 0.960 | #17 |
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-50) | Delta < 1.25^3 | 0.982 | #17 |
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-18) | absolute relative error | 0.119 | #38 |
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-18) | RMSE | 4.950 | #22 |
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-18) | RMSE log | 0.197 | #19 |
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-18) | Delta < 1.25 | 0.863 | #19 |
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-18) | Delta < 1.25^2 | 0.957 | #19 |
| Monocular Depth Estimation | KITTI Eigen split | SC-Depth (ResNet-18) | Delta < 1.25^3 | 0.981 | #18 |

