Visual Odometry
97 papers with code • 0 benchmarks • 22 datasets
Visual Odometry is an important information-fusion task whose central aim is to estimate the pose of a robot from data collected by visual sensors.
Source: Bi-objective Optimization for Robust RGB-D Visual Odometry
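To make the task definition concrete, here is a minimal sketch of the two-view geometry at the heart of most feature-based VO pipelines: given 2D-2D correspondences between two frames, recover the relative camera rotation and translation direction via the eight-point algorithm and a cheirality check. This is an illustrative numpy-only example with synthetic noise-free data, not any listed paper's method; all names are made up for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth inter-frame motion (rotation about y, small translation).
angle = 0.1
R_true = np.array([[np.cos(angle), 0, np.sin(angle)],
                   [0, 1, 0],
                   [-np.sin(angle), 0, np.cos(angle)]])
t_true = np.array([0.5, 0.1, 0.2])

# Synthetic 3D points in front of both cameras, in normalized image coords.
X = rng.uniform([-2, -2, 4], [2, 2, 10], size=(20, 3))
x1 = X / X[:, 2:3]                       # frame 1: camera at identity
X2 = X @ R_true.T + t_true
x2 = X2 / X2[:, 2:3]                     # frame 2

# Eight-point algorithm: each correspondence gives x2^T E x1 = 0,
# a linear constraint on the 9 entries of the essential matrix E.
A = np.stack([np.kron(p2, p1) for p1, p2 in zip(x1, x2)])
E = np.linalg.svd(A)[2][-1].reshape(3, 3)
U, _, Vt = np.linalg.svd(E)
E = U @ np.diag([1, 1, 0]) @ Vt          # enforce rank-2 essential constraint

# Decompose E into the four (R, t) candidates and keep the one that
# places a triangulated point in front of both cameras (cheirality).
W = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
U, _, Vt = np.linalg.svd(E)
cands = []
for R in (U @ W @ Vt, U @ W.T @ Vt):
    R = R * np.sign(np.linalg.det(R))    # make R a proper rotation
    for t in (U[:, 2], -U[:, 2]):
        cands.append((R, t))

def triangulate(R, t, p1, p2):
    # Linear (DLT) triangulation with P1 = [I|0], P2 = [R|t].
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([R, t.reshape(3, 1)])
    M = np.stack([p1[0] * P1[2] - P1[0],
                  p1[1] * P1[2] - P1[1],
                  p2[0] * P2[2] - P2[0],
                  p2[1] * P2[2] - P2[1]])
    Xh = np.linalg.svd(M)[2][-1]
    return Xh[:3] / Xh[3]

def in_front(R, t):
    Xp = triangulate(R, t, x1[0], x2[0])
    return Xp[2] > 0 and (R @ Xp + t)[2] > 0

R_est, t_est = next((R, t) for R, t in cands if in_front(R, t))
```

Note that `t_est` is only recovered up to scale (a unit vector): this is exactly the monocular scale ambiguity mentioned in several of the papers below.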
Latest papers
JPerceiver: Joint Perception Network for Depth, Pose and Layout Estimation in Driving Scenes
A naive approach is to solve them independently, sequentially or in parallel, but this has drawbacks: 1) the depth and VO results suffer from the inherent scale-ambiguity issue; 2) the BEV layout is predicted directly from the front-view image without using any depth-related information, even though the depth map contains useful geometric cues for inferring scene layouts.
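The scale-ambiguity issue mentioned above can be demonstrated in a few lines: scaling the whole scene and the camera translation by the same factor leaves every image projection unchanged, so a monocular system cannot recover absolute scale from images alone. A minimal sketch (illustrative names only):

```python
import numpy as np

def project(X, t):
    # Project a 3D point X seen from a camera translated by t (identity rotation).
    p = X - t
    return p[:2] / p[2]

X = np.array([1.0, 0.5, 4.0])   # a 3D point
t = np.array([0.2, 0.0, 0.1])   # camera translation

s = 3.0                          # arbitrary global scale factor
orig = project(X, t)
scaled = project(s * X, s * t)   # scene and translation scaled together
# orig and scaled are identical: images carry no absolute-scale information
```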
Physical Passive Patch Adversarial Attacks on Visual Odometry Systems
While such perturbations are usually discussed as tailored to a specific input, a universal perturbation can be constructed to alter the model's output on a set of inputs.
PVO: Panoptic Visual Odometry
We present PVO, a novel panoptic visual odometry framework to achieve more comprehensive modeling of the scene motion, geometry, and panoptic segmentation information.
Is Mapping Necessary for Realistic PointGoal Navigation?
However, for PointNav in a realistic setting (RGB-D and actuation noise, no GPS+Compass), this is an open question; one we tackle in this paper.
LF-VIO: A Visual-Inertial-Odometry Framework for Large Field-of-View Cameras with Negative Plane
To tackle this issue, we propose LF-VIO, a real-time VIO framework for cameras with extremely large FoV.
Mind the Gap! A Study on the Transferability of Virtual vs Physical-world Testing of Autonomous Driving Systems
In this paper, we shed light on the problem of generalizing testing results obtained in a driving simulator to a physical platform and provide a characterization and quantification of the sim2real gap affecting SDC testing.
360-DFPE: Leveraging Monocular 360-Layouts for Direct Floor Plan Estimation
We present 360-DFPE, a sequential floor plan estimation method that directly takes 360-images as input without relying on active sensors or 3D information.
TANDEM: Tracking and Dense Mapping in Real-time using Deep Multi-view Stereo
In this paper, we present TANDEM, a real-time monocular tracking and dense mapping framework.
MotionHint: Self-Supervised Monocular Visual Odometry with Motion Constraints
A key aspect of our approach is to use an appropriate motion model that can help existing self-supervised monocular VO (SSM-VO) algorithms to overcome issues related to the local minima within their self-supervised loss functions.
Towards Robust Monocular Visual Odometry for Flying Robots on Planetary Missions
In contrast to most other approaches, our framework can also handle rotation-only motions that are particularly challenging for monocular odometry systems.
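Why are rotation-only motions so challenging for monocular odometry? Under pure rotation there is no baseline: the image motion of every point is independent of its depth, so no parallax is available and neither scene structure nor translation can be triangulated. A small numpy illustration (all values are made up):

```python
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

R = rot_z(0.1)                   # pure rotation, zero translation
ray = np.array([0.3, -0.2, 1.0]) # a viewing-ray direction from camera 1

# Points at very different depths along the same ray project to the
# SAME pixel in the rotated view: depth cancels out, so there is no
# parallax signal for structure or translation estimation.
projs = []
for depth in (1.0, 5.0, 50.0):
    X = depth * ray
    x2 = R @ X
    projs.append(x2[:2] / x2[2])
```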