Machine Learning in Appearance-based Robot Self-localization

17 Jun 2017 · Alexander Kuleshov, Alexander Bernstein, Evgeny Burnaev, Yury Yanovich

An appearance-based robot self-localization problem is considered in the machine learning framework. The appearance space is composed of all possible images that can be captured by a robot's visual system at all possible robot locations. Using recent manifold learning and deep learning techniques, we propose a new geometrically motivated solution based on training data consisting of a finite set of images captured at known robot locations. The solution includes estimation of the robot localization mapping from the appearance space to the robot localization space, as well as estimation of the inverse mapping for modeling visual image features. The latter allows solving the robot localization problem as a Kalman filtering problem.
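To make the filtering idea concrete, below is a minimal sketch of how a learned inverse mapping (location to image features) could serve as the observation model of an extended Kalman filter. Everything here is a toy assumption rather than the paper's actual pipeline: `feature_model` is a hypothetical analytic stand-in for the manifold/deep-learning estimate of the inverse mapping, the motion model is simple additive odometry, and the Jacobian is taken numerically.

```python
import numpy as np

# Hypothetical stand-in for the learned inverse mapping g: location -> image features.
# In the paper this mapping would be estimated from training images; here it is a
# fixed toy function so the example runs on its own.
def feature_model(x):
    return np.array([np.sin(x[0]) + 0.5 * x[1],
                     np.cos(x[1]) - 0.3 * x[0],
                     x[0] * x[1]])

def numerical_jacobian(f, x, eps=1e-5):
    """Finite-difference Jacobian of f at x (used in place of an analytic one)."""
    fx = f(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (f(x + dx) - fx) / eps
    return J

def ekf_step(x, P, z, Q, R, u=np.zeros(2)):
    """One extended-Kalman-filter step: odometry prediction + image-feature correction."""
    # Prediction: simple additive odometry motion model (an assumption of this sketch).
    x_pred = x + u
    P_pred = P + Q
    # Correction: compare predicted image features with the observed ones.
    H = numerical_jacobian(feature_model, x_pred)
    y = z - feature_model(x_pred)            # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(x.size) - K @ H) @ P_pred
    return x_new, P_new

# Toy usage: estimate a fixed 2-D location from noisy image features.
rng = np.random.default_rng(0)
true_x = np.array([1.0, 2.0])
x_est, P = np.zeros(2), np.eye(2)
Q, R = 0.01 * np.eye(2), 0.05 * np.eye(3)
for _ in range(20):
    z = feature_model(true_x) + rng.normal(scale=0.05, size=3)
    x_est, P = ekf_step(x_est, P, z, Q, R)
print("estimated location:", x_est)
```

The key design point this illustrates is that once the inverse mapping is available, localization reduces to standard recursive state estimation, with the learned mapping playing the role of the measurement function.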
