Visual Odometry
96 papers with code • 0 benchmarks • 21 datasets
Visual Odometry is an important area of information fusion in which the central aim is to estimate the pose (position and orientation) of a robot using data collected by visual sensors.
Source: Bi-objective Optimization for Robust RGB-D Visual Odometry
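As a minimal illustration of the classical pipeline (not tied to any particular paper below), the sketch estimates frame-to-frame camera motion with OpenCV: ORB features are matched between two frames, an essential matrix is fit with RANSAC, and the relative rotation and unit-scale translation are recovered. The function name, feature counts, and thresholds are illustrative choices, not a reference implementation.

```python
# Minimal sketch of frame-to-frame monocular visual odometry with OpenCV.
# A real system would add keyframing, scale handling, and loop closure.
import cv2
import numpy as np

def relative_pose(img1, img2, K):
    """Estimate the relative rotation R and unit-scale translation t
    between two grayscale frames from matched ORB features."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # RANSAC on the essential matrix rejects outlier matches; monocular
    # VO can recover translation only up to an unknown scale.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t
```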
Latest papers
VBR: A Vision Benchmark in Rome
This paper presents a vision and perception research dataset collected in Rome, featuring RGB data, 3D point clouds, IMU, and GPS data.
VOLoc: Visual Place Recognition by Querying Compressed Lidar Map
The QPC is then compressed by the same GPC and aggregated into a global descriptor by an attention-based aggregation module, which queries the compressed Lidar map in vector space.
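For context, descriptor-based place recognition ultimately reduces to a nearest-neighbor search in descriptor space. The sketch below shows only that final query step using cosine similarity; the paper's GPC compression and attention-based aggregation are not reproduced, and all names are illustrative.

```python
# Illustrative only: querying a database of global descriptors in vector
# space; the attention-based aggregation itself is not shown here.
import numpy as np

def query_map(query_desc, map_descs):
    """Return the index of the closest map descriptor by cosine similarity.
    query_desc: (D,) global descriptor of the query.
    map_descs: (N, D) descriptors of compressed map segments."""
    q = query_desc / np.linalg.norm(query_desc)
    M = map_descs / np.linalg.norm(map_descs, axis=1, keepdims=True)
    return int(np.argmax(M @ q))
```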
YOLOPoint: Joint Keypoint and Object Detection
Intelligent vehicles of the future must be capable of understanding and navigating safely through their surroundings.
Event-Based Visual Odometry on Non-Holonomic Ground Vehicles
Despite the promise of superior performance under challenging conditions, event-based motion estimation remains a hard problem owing to the difficulty of extracting and tracking stable features from event streams.
Amirkabir campus dataset: Real-world challenges and scenarios of Visual Inertial Odometry (VIO) for visually impaired people
Visual Inertial Odometry (VIO) algorithms accurately estimate the camera trajectory by using camera and Inertial Measurement Unit (IMU) sensors.
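As a schematic of the inertial side of VIO, the sketch below dead-reckons orientation, velocity, and position by integrating gyroscope and accelerometer samples between camera frames. Real systems use IMU preintegration with bias estimation and fuse the result with visual constraints in a filter or factor graph; the function names and the simple Euler step here are illustrative assumptions.

```python
# Schematic first-order IMU integration between camera frames.
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def integrate_imu(R, p, v, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """One Euler step of the IMU kinematics.
    R: 3x3 body-to-world rotation; p, v: position and velocity in world;
    gyro, accel: body-frame angular rate and specific force."""
    R_next = R @ (np.eye(3) + skew(gyro) * dt)   # small-angle update
    a_world = R @ accel + g                      # rotate and add gravity
    v_next = v + a_world * dt
    p_next = p + v * dt + 0.5 * a_world * dt ** 2
    return R_next, p_next, v_next
```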
SR-LIVO: LiDAR-Inertial-Visual Odometry and Mapping with Sweep Reconstruction
Existing LiDAR-inertial-visual odometry and mapping (LIV-SLAM) systems mainly utilize the LiDAR-inertial odometry (LIO) module for structure reconstruction and the visual-inertial odometry (VIO) module for color rendering.
Loss it right: Euclidean and Riemannian Metrics in Learning-based Visual Odometry
This paper surveys different pose representations and metric functions used in visual odometry (VO) networks.
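To make the distinction concrete: a Euclidean metric compares rotation matrices in the ambient space of 3x3 matrices, whereas the Riemannian (geodesic) metric on SO(3) measures the angle of the relative rotation. A minimal sketch, assuming rotation-matrix inputs:

```python
# Two common rotation-error metrics used in VO losses.
import numpy as np

def chordal_distance(R1, R2):
    """Euclidean (Frobenius) distance in the ambient matrix space."""
    return np.linalg.norm(R1 - R2, ord="fro")

def geodesic_distance(R1, R2):
    """Riemannian distance on SO(3): the angle of the relative rotation."""
    cos_theta = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))  # radians
```

The two are related by chordal = 2*sqrt(2)*sin(theta/2), so they agree for small errors but behave differently for large rotations, which is why the choice of metric matters when training VO networks.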
Deep Event Visual Odometry
To remove the dependency on additional sensors and to push the limits of using only a single event camera, we present Deep Event VO (DEVO), the first monocular event-only system with strong performance on a large number of real-world benchmarks.
Converting Depth Images and Point Clouds for Feature-based Pose Estimation
Compared to Bearing Angle images, our method yields brighter, higher-contrast images with more visible contours and more details.
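For background on the conversion the paper studies (its specific image-generation method is not reproduced here), a depth image can be back-projected into a point cloud with the pinhole camera model; the intrinsics fx, fy, cx, cy below are assumed inputs.

```python
# Back-project a depth image into a point cloud with pinhole intrinsics.
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """depth: (H, W) array of metric depths; returns (H*W, 3) XYZ points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```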
Transformer-based model for monocular visual odometry: a video understanding approach
In this work, we treat monocular visual odometry as a video understanding task and estimate the camera's 6-DoF pose.
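A hedged sketch of that framing, assuming per-frame embeddings from some backbone: a transformer encoder attends over the frame sequence and a linear head regresses a 6-DoF pose per step. The dimensions, layer counts, and head design are illustrative assumptions, not the paper's architecture.

```python
# Sketch: transformer over frame embeddings regressing 6-DoF poses.
import torch
import torch.nn as nn

class VideoPoseTransformer(nn.Module):
    def __init__(self, d_model=256, n_heads=8, n_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 6)  # 3 translation + 3 rotation params

    def forward(self, frame_embeddings):
        # frame_embeddings: (batch, time, d_model), e.g. from a CNN backbone
        features = self.encoder(frame_embeddings)
        return self.head(features)  # (batch, time, 6) relative poses

# Usage: 2 clips of 16 frames with 256-dim embeddings -> (2, 16, 6) poses.
poses = VideoPoseTransformer()(torch.randn(2, 16, 256))
```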