Visual Place Recognition
102 papers with code • 27 benchmarks • 19 datasets
Visual Place Recognition is the task of matching a view of a place with a different view of the same place taken at a different time.
Source: Visual place recognition using landmark distribution descriptors
Libraries
Use these libraries to find Visual Place Recognition models and implementations.
Latest papers with no code
On the Estimation of Image-matching Uncertainty in Visual Place Recognition
In Visual Place Recognition (VPR) the pose of a query image is estimated by comparing the image to a map of reference images with known reference poses.
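This retrieval formulation can be sketched in a few lines: each image is summarized by a global descriptor, and the query inherits the pose of its nearest reference. This is a minimal illustration, not any particular paper's method; the descriptor vectors are assumed to come from some upstream feature extractor (e.g. a CNN or transformer backbone), and `retrieve_pose` is a hypothetical helper name.

```python
import numpy as np

def retrieve_pose(query_desc, ref_descs, ref_poses):
    """Nearest-neighbor VPR sketch: return the known pose of the
    reference image whose global descriptor is closest to the query's.

    query_desc: (D,) descriptor of the query image (assumed precomputed)
    ref_descs:  (N, D) descriptors of the reference map images
    ref_poses:  length-N list of the corresponding reference poses
    """
    # L2-normalize so Euclidean distance ranks images like cosine similarity.
    q = query_desc / np.linalg.norm(query_desc)
    r = ref_descs / np.linalg.norm(ref_descs, axis=1, keepdims=True)
    dists = np.linalg.norm(r - q, axis=1)
    best = int(np.argmin(dists))          # index of the closest reference
    return ref_poses[best], float(dists[best])
```

In practice the linear scan over `ref_descs` would be replaced by an approximate nearest-neighbor index for large maps, and the top match is often re-ranked with local-feature verification, as several of the papers below explore.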
NYC-Indoor-VPR: A Long-Term Indoor Visual Place Recognition Dataset with Semi-Automatic Annotation
Visual Place Recognition (VPR) in indoor environments is beneficial to humans and robots for better localization and navigation.
Enhancing Visual Place Recognition via Fast and Slow Adaptive Biasing in Event Cameras
Event cameras are increasingly popular in robotics due to their beneficial features, such as low latency, energy efficiency, and high dynamic range.
Local positional graphs and attentive local features for a data and runtime-efficient hierarchical place recognition pipeline
This paper proposes a runtime and data-efficient hierarchical VPR pipeline that extends existing approaches and presents novel ideas.
VDNA-PR: Using General Dataset Representations for Robust Sequential Visual Place Recognition
Two parallel lines of work on VPR have shown, on one side, that general-purpose off-the-shelf feature representations can provide robustness to domain shifts, and, on the other, that fused information from sequences of images improves performance.
NeRF-Supervised Feature Point Detection and Description
Feature point detection and description is the backbone for various computer vision applications, such as Structure-from-Motion, visual SLAM, and visual place recognition.
BEV2PR: BEV-Enhanced Visual Place Recognition with Structural Cues
To tackle the above issues, we design a new BEV-enhanced VPR framework, namely BEV2PR, which can generate a composite descriptor with both visual cues and spatial awareness based solely on a single camera.
Spike-EVPR: Deep Spiking Residual Network with Cross-Representation Aggregation for Event-Based Visual Place Recognition
This module is designed to extract features shared between the two representations and features specific to each.
Regressing Transformers for Data-efficient Visual Place Recognition
Visual place recognition is a critical task in computer vision, especially for localization and navigation systems.
PlaceFormer: Transformer-based Visual Place Recognition using Multi-Scale Patch Selection and Fusion
To re-rank the retrieved images, PlaceFormer merges the patch tokens from the transformer to form multi-scale patches.