Visual Odometry
99 papers with code • 1 benchmark • 23 datasets
Visual Odometry is an important area of information fusion in which the central aim is to estimate the pose of a robot using data collected by visual sensors.
Source: Bi-objective Optimization for Robust RGB-D Visual Odometry
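At its core, a VO pipeline estimates a relative motion (rotation R_k, translation t_k) between consecutive frames and chains these into a global trajectory. A minimal sketch of that accumulation step, using hypothetical planar motions (the frame-to-frame estimates themselves would come from feature matching or a learned model, which is omitted here):

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the z-axis (illustrative planar motion)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def accumulate_poses(relative_motions):
    """Chain per-frame relative motions (R_k, t_k) into global poses.

    Each t_k is expressed in the current camera frame, so the update is
    p <- p + R @ t_k followed by R <- R @ R_k. Returns the list of
    global positions, starting at the origin.
    """
    R = np.eye(3)
    p = np.zeros(3)
    trajectory = [p.copy()]
    for R_k, t_k in relative_motions:
        p = p + R @ t_k
        R = R @ R_k
        trajectory.append(p.copy())
    return trajectory

# Hypothetical relative motions: forward 1 m, turn 90 degrees, forward 1 m.
steps = [
    (np.eye(3),             np.array([1.0, 0.0, 0.0])),
    (rotation_z(np.pi / 2), np.array([1.0, 0.0, 0.0])),
    (np.eye(3),             np.array([1.0, 0.0, 0.0])),
]
traj = accumulate_poses(steps)
```

Note that errors in each (R_k, t_k) compound through this chain, which is why drift is the central failure mode that the papers below try to reduce.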
Libraries
Use these libraries to find Visual Odometry models and implementations.
Datasets
Latest papers
MotionHint: Self-Supervised Monocular Visual Odometry with Motion Constraints
A key aspect of our approach is to use an appropriate motion model that can help existing self-supervised monocular VO (SSM-VO) algorithms to overcome issues related to the local minima within their self-supervised loss functions.
Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions
In contrast to most other approaches, our framework can also handle rotation-only motions that are particularly challenging for monocular odometry systems.
Recalibrating the KITTI Dataset Camera Setup for Improved Odometry Accuracy
In this paper, we propose a new approach for one-shot calibration of the KITTI dataset's multiple-camera setup.
Instant Visual Odometry Initialization for Mobile AR
However, standard visual odometry or SLAM algorithms require motion parallax to initialize (see Figure 1) and, therefore, suffer from delayed initialization.
RAM-VO: Less is more in Visual Odometry
Building vehicles capable of operating without human supervision requires the determination of the agent's pose.
VOLDOR-SLAM: For the Times When Feature-Based or Direct Methods Are Not Good Enough
We present a dense-indirect SLAM system using external dense optical flows as input.
VOLDOR: Visual Odometry from Log-logistic Dense Optical flow Residuals
We propose a dense indirect visual odometry method taking as input externally estimated optical flow fields instead of hand-crafted feature correspondences.
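To illustrate the general idea of weighting flow residuals by a log-logistic density (this is only a toy sketch, not VOLDOR's actual probabilistic model; the parameters alpha and beta here are arbitrary assumptions):

```python
import numpy as np

def log_logistic_pdf(r, alpha=1.0, beta=2.0):
    """Log-logistic density; alpha is the scale, beta the shape parameter."""
    z = (r / alpha) ** beta
    return (beta / alpha) * (r / alpha) ** (beta - 1) / (1.0 + z) ** 2

# Hypothetical end-point-error residuals of a flow field (in pixels):
# the small residuals fit the candidate pose, the 10-pixel one is an outlier.
residuals = np.array([0.2, 0.5, 1.0, 10.0])
weights = log_logistic_pdf(residuals)
weights /= weights.sum()   # normalized influence of each pixel on the pose
```

Because the density has a heavy right tail, large residuals receive a near-zero weight instead of dominating a least-squares fit, which is the usual motivation for robust residual models in dense VO.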
Spatiotemporal Registration for Event-based Visual Odometry
The state-of-the-art method of contrast maximisation recovers the motion from a batch of events by maximising the contrast of the image of warped events.
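The contrast-maximisation idea can be sketched in a few lines: warp each event back to a reference time along a candidate image-plane velocity, accumulate the warped events into an image, and pick the velocity that maximises the image's variance. The event data and grid search below are hypothetical simplifications (real methods use continuous optimisation and richer motion models):

```python
import numpy as np

def warped_image(events, velocity, shape):
    """Accumulate events (x, y, t) into an image after warping each one
    back to t = 0 along a candidate constant velocity (vx, vy)."""
    img = np.zeros(shape)
    for x, y, t in events:
        xw = int(round(x - velocity[0] * t))
        yw = int(round(y - velocity[1] * t))
        if 0 <= yw < shape[0] and 0 <= xw < shape[1]:
            img[yw, xw] += 1.0
    return img

def contrast(img):
    """Variance of the image of warped events (the objective to maximise)."""
    return img.var()

# Hypothetical events from a vertical edge moving at 2 px / time unit in x.
events = [(10.0 + 2.0 * t, y, t)
          for t in np.linspace(0.0, 5.0, 20)
          for y in range(8, 12)]

# Grid search: warping with the true velocity stacks all events onto the
# same pixels, producing the sharpest (highest-contrast) image.
candidates = [(vx, 0.0) for vx in np.arange(0.0, 4.1, 0.5)]
best = max(candidates,
           key=lambda v: contrast(warped_image(events, v, (20, 30))))
```

The paper above then adds a spatiotemporal registration step on top of this batch-wise objective.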
DF-VO: What Should Be Learnt for Visual Odometry?
More surprisingly, they show that well-trained networks enable scale-consistent predictions over long videos, although their accuracy remains inferior to traditional methods because they ignore geometric information.
OmniDet: Surround View Cameras based Multi-task Visual Perception Network for Autonomous Driving
We obtain the state-of-the-art results on KITTI for depth estimation and pose estimation tasks and competitive performance on the other tasks.