We propose the concept of a multi-frame GAN (MFGAN) and demonstrate its potential for enhancing image sequences for stereo visual odometry in low-light conditions.
In this letter, a stereo-based multi-motion visual odometry method is proposed to acquire the poses of the robot and other moving objects.
In this paper, we extend the recently developed continuous visual odometry framework for RGB-D cameras to an adaptive framework via online hyperparameter learning.
As for the pose tracker, we propose a visual odometry system that fuses feature-matching and virtual LiDAR scan-matching results.
In this work, we propose a monocular semi-direct visual odometry framework that exploits the complementary strengths of edge features and local photometric information for illumination-robust camera motion estimation and scene reconstruction.
The hallucination network is trained to predict fake visual features from thermal images using the robust Huber loss.
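The Huber loss mentioned above is quadratic for small residuals and linear for large ones, which limits the influence of outlier feature residuals during hallucination training. A minimal numpy sketch (the function name and `delta` threshold are illustrative, not from the source):

```python
import numpy as np

def huber_loss(pred, target, delta=1.0):
    """Mean Huber loss between predicted (hallucinated) and target features.

    Residuals with |r| <= delta are penalized quadratically (0.5 * r^2);
    larger residuals are penalized linearly (delta * (|r| - 0.5 * delta)),
    so outliers pull on the regression less than under an L2 loss.
    """
    r = np.abs(pred - target)
    quadratic = 0.5 * r ** 2
    linear = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quadratic, linear).mean()
```

For example, a residual of 0.5 (inside `delta`) costs 0.125, while a residual of 2.0 costs only 1.5 rather than the 2.0 an L2 loss would give.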
Based on rigid projective geometry, the estimated stereo depth is used to guide the camera motion estimation, and the depth and camera motion are used to guide the residual flow estimation.
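The rigid-projective-geometry guidance can be sketched as computing the flow induced purely by camera motion: back-project each pixel with its stereo depth, apply the estimated camera motion, and re-project; the residual flow is then whatever the full optical flow contains beyond this rigid component. The function below is an assumed sketch under a standard pinhole model with intrinsics `K`, not the paper's implementation:

```python
import numpy as np

def rigid_flow(depth, K, R, t):
    """Per-pixel 2D flow induced by rigid camera motion (R, t).

    Each pixel is back-projected to 3D using its stereo depth, transformed
    by the camera motion, and re-projected with intrinsics K; the returned
    displacement is the rigid-motion flow. Residual (object) flow can be
    obtained as full optical flow minus this rigid flow.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1).astype(float)
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)  # back-project to 3D
    pts = R @ pts + t.reshape(3, 1)                      # apply camera motion
    proj = K @ pts
    proj = proj[:2] / proj[2:3]                          # re-project to pixels
    return (proj - pix[:2]).reshape(2, h, w)
```

As a sanity check, an identity motion yields zero rigid flow, and a pure sideways translation of a fronto-parallel unit-depth scene shifts every pixel by `fx * tx / Z` in the horizontal direction.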