SRVIO: Super Robust Visual Inertial Odometry for dynamic environments and challenging Loop-closure conditions

14 Jan 2022 · Ali Samadzadeh, Ahmad Nickabadi

There has been extensive research on visual localization and odometry for autonomous robots and virtual reality during the past decades. Traditionally, this problem has been solved with the help of expensive sensors such as lidars. Nowadays, the focus of leading research in this field is on robust localization using more economical sensors, such as cameras and IMUs. Consequently, geometric visual localization methods have become increasingly accurate over time. However, these methods still suffer from significant tracking loss and divergence in challenging environments, such as a room full of moving people. Researchers have started using deep neural networks (DNNs) to mitigate this problem. The main idea behind using DNNs is to better understand the challenging aspects of the data and to cope with complex conditions such as a dynamic object moving in front of the camera and covering its entire view, extreme lighting conditions, and high camera speed. Prior end-to-end DNN methods have overcome some of these challenges, but no general and robust framework is available that handles all of them together. In this paper, we combine geometric and DNN-based methods to retain the generality and speed of geometric SLAM frameworks while overcoming most of these challenging conditions with the help of DNNs, delivering the most robust framework so far. To do so, we design a framework based on VINS-Mono and show that it achieves state-of-the-art results on the TUM-Dynamic, TUM-VI, ADVIO, and EuRoC datasets compared to geometric and end-to-end DNN-based SLAM systems. The proposed framework also achieves outstanding results on extreme simulated cases resembling the aforementioned challenges.
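To make the hybrid idea in the abstract concrete, the sketch below shows one plausible way a DNN component could plug into a VINS-Mono-style geometric pipeline: a small network scores image regions for reliability (e.g., to down-weight moving objects), and only features passing that check reach the geometric optimizer. The network architecture, function names, and the 0.5 threshold are illustrative assumptions for this page, not the design published in the paper.

```python
# Illustrative sketch only: module names, shapes, and thresholds are assumptions,
# not SRVIO's actual architecture. It shows the general hybrid pattern described
# in the abstract: a DNN front-end gating features for a geometric VIO back-end.
import torch
import torch.nn as nn


class ReliabilityNet(nn.Module):
    """Hypothetical CNN predicting a per-pixel reliability map for feature tracking."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=1),  # 1-channel logit map
        )

    def forward(self, gray_image):
        # gray_image: (B, 1, H, W) grayscale frame; output in [0, 1]
        return torch.sigmoid(self.encoder(gray_image))


def filter_tracked_features(features, reliability_map, threshold=0.5):
    """Keep only tracked keypoints lying on reliable (likely static) pixels.

    features:        iterable of (u, v) pixel coordinates from the geometric tracker
    reliability_map: (H, W) tensor produced by ReliabilityNet
    """
    kept = []
    for u, v in features:
        if reliability_map[int(v), int(u)] >= threshold:
            kept.append((u, v))
    return kept


# In a VINS-Mono-style system, the surviving features (together with IMU
# preintegration) would feed the sliding-window optimizer exactly as in the
# purely geometric pipeline; the DNN only decides which measurements to trust.
```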
