Robust Monocular Edge Visual Odometry through Coarse-to-Fine Data Association

25 Sep 2019 · Xiaolong Wu, Patricio Vela, Cedric Pradalier

In this work, we propose a monocular visual odometry framework that exploits the best attributes of edge features for illumination-robust camera tracking, while at the same time mitigating the performance degradation of edge mapping. In the front-end, ICP-based edge registration provides robust motion estimation and coarse data association under lighting changes. In the back-end, a novel edge-guided data association pipeline searches for the best photometrically matched points along geometrically plausible edges through template matching, so that the matches can be further refined in a subsequent bundle adjustment. The core of the proposed data association strategy is a point-to-edge geometric uncertainty analysis, which analytically derives (1) a probabilistic search length formula that significantly reduces the search space and speeds up the system, and (2) a geometric confidence metric for detecting mapping degradation based on the predicted depth uncertainty. Moreover, a match-confidence-based patch size adaptation strategy is integrated into the pipeline, interacting with the other components to reduce matching ambiguity. We present extensive analysis and evaluation of the proposed system on synthetic and real-world benchmark datasets under illumination changes and large camera motions, where it outperforms current state-of-the-art algorithms.
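
The sketch below illustrates the coarse-to-fine data association idea described in the abstract: an edge point with a predicted depth and depth uncertainty is projected into the target frame, the depth interval bounds a 1D search segment (the "probabilistic search length"), and the best photometric match is found by template matching along that segment. This is a minimal illustration assuming a standard pinhole model; all function names and parameters (`project`, `search_segment`, `match_along_segment`, `patch_half`, etc.) are assumptions for this example, not the authors' implementation.

```python
import numpy as np


def project(K, T_target_ref, p_ref, depth):
    """Project a reference-frame pixel with a given depth into the target frame.

    K            : 3x3 camera intrinsics
    T_target_ref : 4x4 rigid transform from reference to target frame
    p_ref        : (u, v) pixel in the reference frame
    depth        : scene depth of the pixel in the reference frame
    """
    uv1 = np.array([p_ref[0], p_ref[1], 1.0])
    X_ref = depth * (np.linalg.inv(K) @ uv1)            # back-project to 3D
    X_tgt = T_target_ref[:3, :3] @ X_ref + T_target_ref[:3, 3]
    x = K @ X_tgt
    return x[:2] / x[2]


def search_segment(K, T_target_ref, p_ref, depth_mean, depth_std, n_sigma=2.0):
    """Bound the 1D search segment in the target image from depth uncertainty.

    Projecting the (mean - n*sigma) and (mean + n*sigma) depths gives the
    endpoints of the segment along which the correspondence should lie,
    which is the "probabilistic search length" idea from the abstract.
    """
    d_min = max(depth_mean - n_sigma * depth_std, 1e-3)
    d_max = depth_mean + n_sigma * depth_std
    return (project(K, T_target_ref, p_ref, d_min),
            project(K, T_target_ref, p_ref, d_max))


def ncc(patch_a, patch_b):
    """Zero-mean normalized cross-correlation between two equally sized patches."""
    a = patch_a - patch_a.mean()
    b = patch_b - patch_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float((a * b).sum() / denom)


def match_along_segment(img_ref, img_tgt, p_ref, seg_a, seg_b,
                        patch_half=4, step=0.5):
    """Template-match the reference patch along the bounded segment.

    Returns the best target pixel and its NCC score; the score could then
    drive patch size adaptation (larger patches when matches are ambiguous).
    Image-boundary checks are omitted for brevity.
    """
    def patch(img, c, h):
        r, u = int(round(c[1])), int(round(c[0]))
        return img[r - h:r + h + 1, u - h:u + h + 1].astype(np.float64)

    ref_patch = patch(img_ref, p_ref, patch_half)
    length = np.linalg.norm(seg_b - seg_a)
    best_score, best_px = -1.0, None
    for t in np.arange(0.0, 1.0 + 1e-9, step / max(length, 1e-6)):
        p = seg_a + t * (seg_b - seg_a)
        score = ncc(ref_patch, patch(img_tgt, p, patch_half))
        if score > best_score:
            best_score, best_px = score, p
    return best_px, best_score
```

A low best NCC score would indicate an ambiguous match, which in the paper's pipeline motivates both enlarging the matching patch and flagging potential mapping degradation via the geometric confidence metric.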
