AtLoc: Attention Guided Camera Localization

8 Sep 2019  ·  Bing Wang, Changhao Chen, Chris Xiaoxuan Lu, Peijun Zhao, Niki Trigoni, Andrew Markham

Deep learning has achieved impressive results in camera localization, but current single-image techniques typically suffer from a lack of robustness, leading to large outliers. To some extent, this has been tackled by sequential (multi-image) or geometry-constraint approaches, which can learn to reject dynamic objects and varying illumination conditions to achieve better performance. In this work, we show that attention can be used to force the network to focus on more geometrically robust objects and features, achieving state-of-the-art performance on common benchmarks even when using only a single image as input. Extensive experimental evidence is provided on public indoor and outdoor datasets. Through visualization of the saliency maps, we demonstrate how the network learns to reject dynamic objects, yielding superior global camera pose regression performance. The source code is available at https://github.com/BingCS/AtLoc.
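To make the idea concrete, the following is a minimal PyTorch sketch of attention-guided single-image pose regression in the spirit described above. The module names, the ResNet-34 backbone, the token-level self-attention, and the log-quaternion rotation head are illustrative assumptions, not the authors' exact AtLoc implementation; see the linked repository for that.

```python
import torch
import torch.nn as nn
from torchvision import models


class SelfAttention(nn.Module):
    """Non-local-style self-attention over flattened CNN feature-map tokens (illustrative)."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim // 8)
        self.key = nn.Linear(dim, dim // 8)
        self.value = nn.Linear(dim, dim)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned weight of the attended residual

    def forward(self, x):  # x: (B, N, C) tokens
        q, k, v = self.query(x), self.key(x), self.value(x)
        attn = torch.softmax(q @ k.transpose(1, 2) / (q.shape[-1] ** 0.5), dim=-1)
        return x + self.gamma * (attn @ v)  # residual connection keeps original features


class AttentionPoseNet(nn.Module):
    """Single-image pose regressor: CNN encoder -> self-attention -> pose head (sketch)."""

    def __init__(self, feat_dim=512):
        super().__init__()
        backbone = models.resnet34(weights=None)
        # Drop the average pool and classifier to keep the spatial feature map.
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])
        self.attention = SelfAttention(feat_dim)
        self.fc_xyz = nn.Linear(feat_dim, 3)    # translation (x, y, z)
        self.fc_log_q = nn.Linear(feat_dim, 3)  # rotation as a log quaternion (assumption)

    def forward(self, img):  # img: (B, 3, H, W)
        f = self.encoder(img)                         # (B, C, h, w)
        tokens = f.flatten(2).transpose(1, 2)         # (B, h*w, C)
        attended = self.attention(tokens).mean(dim=1) # globally pooled, attention-reweighted descriptor
        return torch.cat([self.fc_xyz(attended), self.fc_log_q(attended)], dim=1)


# Usage: regress a 6-DoF pose (3 translation + 3 log-quaternion values) from one RGB frame.
model = AttentionPoseNet()
pose = model(torch.randn(1, 3, 256, 256))  # -> tensor of shape (1, 6)
```

The attention map computed inside `SelfAttention` is what would be rendered as a saliency map: regions with low attention weight (e.g. moving vehicles or pedestrians) contribute little to the pooled descriptor, so the pose head relies on more geometrically stable structure.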

Datasets

Oxford RobotCar  ·  Oxford Radar RobotCar
Task                 | Dataset                        | Model  | Metric                     | Value | Global Rank
---------------------|--------------------------------|--------|----------------------------|-------|------------
Visual Localization  | Oxford Radar RobotCar (Full-6) | AtLoc+ | Mean Translation Error (m) | 17.92 | #7
Visual Localization  | Oxford RobotCar Full           | AtLoc+ | Mean Translation Error (m) | 13.70 | #2
Visual Localization  | Oxford RobotCar Full           | AtLoc  | Mean Translation Error (m) | 29.6  | #6
Camera Localization  | Oxford RobotCar Full           | AtLoc+ | Mean Translation Error (m) | 21.0  | #4
Camera Localization  | Oxford RobotCar Full           | AtLoc  | Mean Translation Error (m) | 29.6  | #6

Methods


No methods listed for this paper.