The results show that integrating texture features yields a superior SLAM system capable of matching images across day and night.
We introduce M2DGR: a novel large-scale dataset collected by a ground robot with a full sensor suite including six fish-eye and one sky-pointing RGB cameras, an infrared camera, an event camera, a Visual-Inertial Sensor (VI-sensor), an inertial measurement unit (IMU), a LiDAR, a consumer-grade Global Navigation Satellite System (GNSS) receiver, and a GNSS-IMU navigation system with real-time kinematic (RTK) signals.
We address the problem of estimating the poses of multiple instances of the source point cloud within a target point cloud.
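The single-instance building block of this problem can be illustrated with least-squares rigid alignment. The following minimal sketch (the function name `estimate_rigid_pose` is our illustration, not the paper's method) recovers one pose from putative point correspondences via the standard SVD-based Kabsch solution; a multi-instance estimator would run such a solver inside a hypothesis-generation loop (e.g. RANSAC with clustering), which is beyond this sketch.

```python
import numpy as np

def estimate_rigid_pose(src, tgt):
    """Least-squares rigid transform (R, t) mapping src -> tgt.

    src, tgt: (N, 3) arrays of corresponding points.
    Standard Kabsch/Umeyama solution via SVD.
    """
    src_c = src - src.mean(axis=0)
    tgt_c = tgt - tgt.mean(axis=0)
    H = src_c.T @ tgt_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # enforce det(R) = +1
    t = tgt.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy check: recover a known pose from noiseless correspondences.
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
tgt = src @ R_true.T + t_true
R, t = estimate_rigid_pose(src, tgt)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```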
Inspired by early works on indoor modeling, we leverage the structural regularities exhibited in indoor scenes to train a better depth network.
This paper proposes a novel simultaneous localization and mapping (SLAM) approach, namely Attention-SLAM, which simulates human navigation mode by combining a visual saliency model (SalNavNet) with traditional monocular visual SLAM.
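Abstractly, the idea amounts to weighting each feature's error term by its visual saliency, so that salient regions dominate the pose optimization. Below is a minimal sketch of such a weighting (our illustration only; SalNavNet itself is a learned model, stubbed here as a precomputed saliency map):

```python
import numpy as np

def weighted_reprojection_cost(observed_px, projected_px, saliency_map):
    """Sum of squared reprojection errors, each scaled by the saliency
    at the observed pixel.

    observed_px, projected_px: (N, 2) pixel coordinates (x, y).
    saliency_map: (H, W) array in [0, 1], e.g. from a saliency model.
    """
    cols = observed_px[:, 0].astype(int)
    rows = observed_px[:, 1].astype(int)
    w = saliency_map[rows, cols]                   # per-feature weight
    residuals = np.linalg.norm(observed_px - projected_px, axis=1)
    return np.sum(w * residuals ** 2)

# Toy usage: with identical 2-pixel errors, only the feature inside the
# salient region contributes to the cost.
saliency = np.zeros((480, 640))
saliency[200:280, 300:400] = 1.0
obs = np.array([[350.0, 240.0], [50.0, 50.0]])
proj = obs + 2.0
print(weighted_reprojection_cost(obs, proj, saliency))  # 8.0
```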
The experimental results show that the proposed method converges within only a few iterations and achieves an accuracy of 91.15% on a real IMU dataset, demonstrating its efficiency and effectiveness.
Instead of using the Manhattan world assumption, we use the Atlanta world model to describe such regularity.
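In an Atlanta world, all planes share a single vertical direction while walls may face any of several fixed horizontal directions, in contrast to the single orthogonal triad of a Manhattan world. One rough sketch of how surface normals could be grouped under this model (the function, thresholds, and binning scheme are our assumptions, not the paper's):

```python
import numpy as np

def classify_atlanta_normals(normals, gravity=np.array([0.0, 0.0, 1.0]),
                             vertical_thresh_deg=10.0, azimuth_bin_deg=5.0):
    """Split unit surface normals into floor/ceiling vs. wall groups.

    Atlanta world: one shared vertical direction; wall normals lie in the
    horizontal plane and cluster around a few azimuth directions.
    Returns (floor/ceiling mask, azimuth bin per wall normal; -1 otherwise).
    """
    cos_v = np.abs(normals @ gravity)
    is_floor_ceiling = cos_v > np.cos(np.radians(vertical_thresh_deg))
    azimuth = np.degrees(np.arctan2(normals[:, 1], normals[:, 0])) % 360.0
    azimuth_bin = np.where(is_floor_ceiling, -1,
                           (azimuth // azimuth_bin_deg).astype(int))
    return is_floor_ceiling, azimuth_bin

# Toy usage: a floor, an axis-aligned wall, and an oblique wall that a
# Manhattan model would reject but an Atlanta model accepts.
normals = np.array([[0.0, 0.0, 1.0],
                    [1.0, 0.0, 0.0],
                    [np.cos(np.radians(40)), np.sin(np.radians(40)), 0.0]])
mask, bins = classify_atlanta_normals(normals)
print(mask, bins)   # [True False False], walls keep distinct azimuth bins
```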
We present a method to jointly estimate scene depth and recover the clear latent image from a foggy video sequence.
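The coupling between the two quantities comes from the standard atmospheric scattering model, I(x) = J(x) * t(x) + A * (1 - t(x)) with transmission t(x) = exp(-beta * d(x)), so an estimated depth map directly yields a dehazed image. A minimal sketch of that inversion (beta and the airlight A are assumed known here; in practice both would be estimated):

```python
import numpy as np

def dehaze_from_depth(foggy, depth, beta=0.8, airlight=0.9, t_min=0.05):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    foggy:    (H, W, 3) observed image, values in [0, 1]
    depth:    (H, W) scene depth in meters
    beta:     scattering coefficient (assumed known in this sketch)
    airlight: global atmospheric light A (assumed known in this sketch)
    """
    t = np.exp(-beta * depth)                 # transmission from depth
    t = np.clip(t, t_min, 1.0)[..., None]     # avoid division blow-up
    clear = (foggy - airlight * (1.0 - t)) / t
    return np.clip(clear, 0.0, 1.0)

# Toy usage on a synthetic foggy frame with uniform 2 m depth.
foggy = np.full((4, 4, 3), 0.6)
depth = np.full((4, 4), 2.0)
print(dehaze_from_depth(foggy, depth)[0, 0])  # recovered clear pixel
```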