LIMO: Lidar-Monocular Visual Odometry

19 Jul 2018 · Johannes Graeter, Alexander Wilczynski, Martin Lauer

Higher-level functionality in autonomous driving depends strongly on a precise motion estimate of the vehicle. Powerful algorithms have been developed, but the great majority of them focus on either binocular imagery or pure LIDAR measurements. The promising combination of camera and LIDAR for visual localization has received comparatively little attention. In this work we fill this gap by proposing a depth extraction algorithm that assigns LIDAR-derived depth to camera feature tracks, and by estimating motion with a robustified keyframe-based bundle adjustment. Semantic labeling is used for outlier rejection and for weighting vegetation landmarks. The capability of this sensor combination is demonstrated on the competitive KITTI dataset, where the method ranks among the top 15. The code is released to the community.
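The depth-extraction idea can be illustrated with a small sketch: LIDAR points are projected into the camera image using the extrinsic calibration and the camera intrinsics, and each tracked image feature is assigned a depth from the projected points in its pixel neighbourhood. The function names, the NumPy-based implementation, and the simple median-of-neighbours depth estimate below are illustrative assumptions, not the paper's exact pipeline, which refines the local depth estimate further.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D LIDAR points into the camera image plane.

    points_lidar: (N, 3) points in the LIDAR frame.
    T_cam_lidar:  (4, 4) extrinsic transform from LIDAR to camera frame.
    K:            (3, 3) camera intrinsic matrix.
    Returns pixel coordinates (M, 2) and depths (M,) of points in front of the camera.
    """
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1          # discard points behind / too close to the camera
    pts_cam = pts_cam[in_front]
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective division to pixel coordinates
    return uv, pts_cam[:, 2]

def estimate_feature_depth(feature_uv, lidar_uv, lidar_depth, radius=5.0):
    """Assign a depth to one tracked feature from nearby projected LIDAR points.

    This sketch takes the median depth of the projected points within `radius`
    pixels of the feature; the paper's depth estimation is more involved.
    Returns None when there is too little LIDAR support, in which case the
    landmark would be kept without a depth prior.
    """
    d2 = np.sum((lidar_uv - feature_uv) ** 2, axis=1)
    neighbours = lidar_depth[d2 < radius ** 2]
    if neighbours.size < 3:
        return None
    return float(np.median(neighbours))
```

Landmarks with a valid depth estimate can then constrain scale directly in the keyframe-based bundle adjustment, while features without LIDAR support are still usable as purely monocular observations.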


Categories

Robotics, Image and Video Processing
