Unsupervised Scale-consistent Depth and Ego-motion Learning from Monocular Video

Recent work has shown that CNN-based depth and ego-motion estimators can be learned using unlabelled monocular videos. However, the performance is limited by unidentified moving objects that violate the underlying static-scene assumption in geometric image reconstruction. More significantly, due to the lack of proper constraints, networks output scale-inconsistent results over different samples, i.e., the ego-motion network cannot provide full camera trajectories over a long video sequence because of the per-frame scale ambiguity. This paper tackles these challenges by proposing a geometry consistency loss for scale-consistent predictions and an induced self-discovered mask for handling moving objects and occlusions. Since we do not leverage multi-task learning like recent works, our framework is much simpler and more efficient. Comprehensive evaluation results demonstrate that our depth estimator achieves state-of-the-art performance on the KITTI dataset. Moreover, we show that our ego-motion network is able to predict a globally scale-consistent camera trajectory for long video sequences, and the resulting visual odometry accuracy is competitive with a recent model trained using stereo videos. To the best of our knowledge, this is the first work to show that deep networks trained using unlabelled monocular videos can predict globally scale-consistent camera trajectories over a long video sequence.
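As a rough illustration of the core idea, the sketch below implements a geometry-consistency term and the induced self-discovered mask in the spirit of the abstract. It assumes the depth map of one frame has already been warped and projected into the other frame's view; the tensor names `projected_depth` and `interpolated_depth` are illustrative, not names from the official code.

```python
import torch

def geometry_consistency(projected_depth, interpolated_depth, eps=1e-7):
    """Sketch of a geometry-consistency loss and self-discovered weight mask.

    projected_depth:    depth of the source frame warped into the target view
    interpolated_depth: depth predicted for the target frame, sampled at the
                        warped pixel locations
    Both are (B, 1, H, W) tensors with positive values in the same
    (arbitrary, network-defined) scale.
    """
    # Normalised depth inconsistency in [0, 1): large where the two depth
    # estimates disagree, e.g. on moving objects or at occlusions.
    diff = (projected_depth - interpolated_depth).abs() / (
        projected_depth + interpolated_depth + eps)

    geometry_loss = diff.mean()    # drives depths toward a consistent scale
    weight_mask = 1.0 - diff       # down-weights unreliable pixels
    return geometry_loss, weight_mask
```

In the full pipeline, the mask would weight the per-pixel photometric loss between the synthesized and real target images so that inconsistent regions (typically moving objects and occlusions) contribute less; the exact warping, sampling, and weighting details follow the paper and its released code rather than this sketch.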

NeurIPS 2019

Datasets

KITTI, Cityscapes

Results from the Paper


Ranked #35 on Monocular Depth Estimation on KITTI Eigen split (using extra training data)

Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data
Monocular Depth Estimation | KITTI Eigen split | SC-SfMLearner_CS+K | absolute relative error | 0.128 | #35 | Yes
Monocular Depth Estimation | KITTI Eigen split | SC-SfMLearner | absolute relative error | 0.137 | #39 | No
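For reference, the absolute relative error reported above is the standard KITTI depth metric: the mean of |pred - gt| / gt over pixels with valid ground truth, with monocular (scale-ambiguous) predictions usually median-scaled to the ground truth beforehand. A minimal sketch with hypothetical array names:

```python
import numpy as np

def abs_rel_error(pred_depth, gt_depth):
    """Mean absolute relative error over pixels with valid ground truth."""
    valid = gt_depth > 0                           # KITTI ground truth is sparse
    pred, gt = pred_depth[valid], gt_depth[valid]
    pred = pred * np.median(gt) / np.median(pred)  # median scaling for monocular methods
    return float(np.mean(np.abs(pred - gt) / gt))
```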

Methods


No methods listed for this paper.