Visual Odometry
97 papers with code • 0 benchmarks • 21 datasets
Visual Odometry is an important area of information fusion in which the central aim is to estimate the pose of a robot using data collected by visual sensors.
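At its core, visual odometry chains per-frame relative pose estimates into a global trajectory. A minimal sketch of that accumulation step, using hand-built synthetic relative motions (4x4 homogeneous transforms) in place of poses actually recovered from images:

```python
import numpy as np

def rotz(theta):
    """4x4 homogeneous transform rotating about the z axis by theta."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def make_step(theta, t):
    """Relative camera motion: rotate by theta about z and translate by t."""
    T = rotz(theta)
    T[:3, 3] = t
    return T

def integrate(relative_poses):
    """Chain per-frame relative transforms into a global trajectory."""
    pose = np.eye(4)
    trajectory = [pose.copy()]
    for T in relative_poses:
        pose = pose @ T          # accumulate motion frame by frame
        trajectory.append(pose.copy())
    return trajectory

# Four unit forward steps with 90-degree turns trace a closed square,
# so the final accumulated pose returns to the identity.
steps = [make_step(np.pi / 2, [1.0, 0.0, 0.0]) for _ in range(4)]
traj = integrate(steps)
```

In a real system the relative transforms would come from feature matching or direct image alignment between consecutive frames; the drift of this open-loop accumulation is what loop closure and SLAM back-ends exist to correct.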
Source: Bi-objective Optimization for Robust RGB-D Visual Odometry
Benchmarks
These leaderboards are used to track progress in Visual Odometry.

Libraries
Use these libraries to find Visual Odometry models and implementations.
Datasets
Latest papers with no code
Brain-Inspired Visual Odometry: Balancing Speed and Interpretability through a System of Systems Approach
In this study, we address the critical challenge of balancing speed and accuracy while maintaining interpretability in visual odometry (VO) systems, a pivotal aspect in the field of autonomous navigation and robotics.
Trajectory Approximation of Video Based on Phase Correlation for Forward Facing Camera
Subsequently, we introduce a novel chain code method termed "dynamic chain code," which is based on the x-shift values derived from the phase correlation.
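Phase correlation recovers the translational shift between two images from the normalized cross-power spectrum of their Fourier transforms; the x-component of that shift is what the snippet above feeds into its chain code. A generic sketch of the shift estimation (not the paper's implementation), assuming integer shifts:

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer (row, col) displacement of image a relative to b
    via phase correlation: the peak of the inverse FFT of the normalized
    cross-power spectrum marks the shift."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts (the correlation is circular).
    return tuple(int(p - n) if p > n // 2 else int(p)
                 for p, n in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(0, 5), axis=(0, 1))   # pure 5-pixel x-shift
```

Here `phase_correlation_shift(shifted, img)` recovers the (0, 5) displacement; for a forward-facing camera, the per-frame x-shift sequence is what gets quantized into the chain code.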
NeRF-VO: Real-Time Sparse Visual Odometry with Neural Radiance Fields
We introduce a novel monocular visual odometry (VO) system, NeRF-VO, that integrates learning-based sparse visual odometry for low-latency camera tracking and a neural radiance scene representation for sophisticated dense reconstruction and novel view synthesis.
Ternary-type Opacity and Hybrid Odometry for RGB-only NeRF-SLAM
To foster this line of research, we also propose a simple yet novel visual odometry scheme that uses a hybrid combination of volumetric and warping-based image renderings.
SuperPrimitive: Scene Reconstruction at a Primitive Level
We address this issue with a new image representation which we call a SuperPrimitive.
iMatching: Imperative Correspondence Learning
Learning feature correspondence is a foundational task in computer vision, holding immense importance for downstream applications such as visual odometry and 3D reconstruction.
Dense Visual Odometry Using Genetic Algorithm
To evaluate our method, we use the root mean square error to compare it with the energy-based method and another metaheuristic method.
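The root-mean-square error over corresponding trajectory positions is a standard way to score a visual odometry estimate against ground truth (the paper's exact evaluation protocol may differ). A minimal sketch:

```python
import numpy as np

def trajectory_rmse(estimated, ground_truth):
    """RMSE between corresponding estimated and ground-truth positions."""
    est = np.asarray(estimated, dtype=float)
    gt = np.asarray(ground_truth, dtype=float)
    errors = np.linalg.norm(est - gt, axis=1)   # per-pose Euclidean error
    return float(np.sqrt(np.mean(errors ** 2)))

# Toy 2D trajectories: each estimated pose is off by 0.1 m laterally.
gt = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
est = np.array([[0.0, 0.1], [1.0, -0.1], [2.0, 0.1]])
```

Benchmark tools such as the TUM RGB-D evaluation scripts additionally align the two trajectories (e.g. with a rigid-body fit) before computing this error, so that a good relative estimate is not penalized for its arbitrary starting frame.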
Inertial Guided Uncertainty Estimation of Feature Correspondence in Visual-Inertial Odometry/SLAM
Visual odometry and Simultaneous Localization And Mapping (SLAM) have been studied as two of the most important tasks in computer vision and robotics, contributing to autonomous navigation and augmented reality systems.
Resilient Simultaneous Localization and Mapping Fusing Ultra Wide Band Range Measurements and Visual Odometry
The solution approach is based on a switching observer that, under standard working conditions, uses a two-dimensional Extended Kalman Filter (EKF) for each observed UWB anchor to estimate its range and bearing with respect to the agent.
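A generic textbook sketch of such a per-anchor EKF (not the paper's observer): the state is the anchor's 2D position relative to the agent, the agent's own motion drives the prediction, and a range/bearing measurement drives the update.

```python
import numpy as np

def ekf_range_bearing_step(x, P, u, z, Q, R):
    """One predict/update cycle of a 2D EKF tracking an anchor's position
    relative to the agent from a (range, bearing) measurement z.
    Generic sketch; state, noise models, and switching logic are assumptions."""
    # Predict: agent motion u shifts the relative anchor position by -u.
    x = x - u
    P = P + Q
    # Measurement model h(x) = (range, bearing) and its Jacobian.
    r = np.hypot(x[0], x[1])
    h = np.array([r, np.arctan2(x[1], x[0])])
    H = np.array([[x[0] / r,       x[1] / r],
                  [-x[1] / r**2,   x[0] / r**2]])
    # Standard EKF update.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - h)
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Noiseless simulation: a fixed anchor observed while the agent translates.
anchor = np.array([3.0, 4.0])
agent = np.zeros(2)
x = np.array([2.0, 5.0])                # deliberately wrong initial guess
P = np.eye(2) * 4.0
Q = np.eye(2) * 1e-4
R = np.diag([1e-3, 1e-3])
u = np.array([0.5, 0.0])                # constant agent motion per step
for _ in range(10):
    agent = agent + u
    rel = anchor - agent
    z = np.array([np.hypot(*rel), np.arctan2(rel[1], rel[0])])
    x, P = ekf_range_bearing_step(x, P, u, z, Q, R)
```

With consistent measurements the estimate converges toward the true relative position; the paper's switching logic would swap this observer out when working conditions are violated.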
Jointly Optimized Global-Local Visual Localization of UAVs
Our GLVL network is a two-stage visual localization approach, combining a large-scale retrieval module that finds regions similar to the UAV flight scene with a fine-grained matching module that estimates the precise UAV coordinates, enabling real-time and precise localization.