Visual Place Recognition
102 papers with code • 27 benchmarks • 20 datasets
Visual Place Recognition is the task of matching a view of a place with a different view of the same place taken at a different time.
Source: Visual place recognition using landmark distribution descriptors
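A common way to realise this matching is to compare compact global image descriptors and retrieve the nearest reference image. The sketch below is illustrative only, not taken from any listed paper: it assumes descriptors (e.g. from a CNN such as NetVLAD) are already extracted, and the function name `match_place` is hypothetical.

```python
import numpy as np

def match_place(query_desc, db_descs):
    """Return (index, similarity) of the best-matching reference place.

    query_desc: (D,) global descriptor of the query image.
    db_descs:   (N, D) descriptors of the N reference images.
    """
    # L2-normalise so that dot products equal cosine similarity
    q = query_desc / np.linalg.norm(query_desc)
    db = db_descs / np.linalg.norm(db_descs, axis=1, keepdims=True)
    sims = db @ q  # cosine similarity of the query to each reference
    return int(np.argmax(sims)), float(np.max(sims))
```

In practice the top match is usually accepted only above a similarity threshold, since the query place may be absent from the reference set.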
Libraries
Use these libraries to find Visual Place Recognition models and implementations.
Most implemented papers
Large scale visual place recognition with sub-linear storage growth
Robotic and animal mapping systems share many of the same objectives and challenges, but differ in one key aspect: where much of the research in robotic mapping has focused on solving the data association problem, the grid cell neurons underlying maps in the mammalian brain appear to intentionally break data association by encoding many locations with a single grid cell neuron.
A Holistic Visual Place Recognition Approach using Lightweight CNNs for Significant ViewPoint and Appearance Changes
This paper presents a lightweight visual place recognition approach, capable of achieving high performance with low computational cost, and feasible for mobile robotics under significant viewpoint and appearance changes.
Collaborative Dense SLAM
In this paper, we present a new system for live collaborative dense surface reconstruction.
PCAN: 3D Attention Map Learning Using Contextual Information for Point Cloud Based Retrieval
Point cloud based retrieval for place recognition is an emerging problem in the vision field.
DEDUCE: Diverse scEne Detection methods in Unseen Challenging Environments
In recent years, there has been a rapid increase in the number of service robots deployed for aiding people in their daily activities.
TextPlace: Visual Place Recognition and Topological Localization Through Reading Scene Texts
This paper proposes a novel visual place recognition algorithm, termed TextPlace, based on scene texts in the wild.
CityLearn: Diverse Real-World Environments for Sample-Efficient Navigation Policy Learning
While deep reinforcement learning has shown success in solving these perception and decision-making problems in an end-to-end manner, these algorithms require large amounts of experience to learn navigation policies from high-dimensional data, which is generally impractical for real robots due to sample complexity.
A Hybrid Compact Neural Architecture for Visual Place Recognition
State-of-the-art algorithms for visual place recognition, and related visual navigation systems, can be broadly split into two categories: computer-science-oriented models including deep learning or image retrieval-based techniques with minimal biological plausibility, and neuroscience-oriented dynamical networks that model temporal properties underlying spatial navigation in the brain.
Fast, Compact and Highly Scalable Visual Place Recognition through Sequence-based Matching of Overloaded Representations
Visual place recognition algorithms trade off three key characteristics: their storage footprint, their computational requirements, and their resultant performance, often expressed in terms of recall rate.
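Recall rate, the performance measure mentioned above, is typically reported as recall@N: the fraction of queries whose true place appears among the top-N retrieved references. A minimal sketch of that metric follows; the function name `recall_at_n` and the input layout are assumptions for illustration, not from any listed paper.

```python
def recall_at_n(ranked_ids, ground_truth, n=1):
    """Fraction of queries whose correct place is in the top-n retrievals.

    ranked_ids:   per query, database indices sorted best-first.
    ground_truth: per query, the set of database indices that count as correct
                  (several references may show the same physical place).
    """
    hits = sum(
        1 for ranks, gt in zip(ranked_ids, ground_truth)
        if set(ranks[:n]) & set(gt)  # any correct index in the top n?
    )
    return hits / len(ground_truth)
```

Reporting recall at several values of N (e.g. 1, 5, 10) shows how quickly performance recovers when a downstream verification step can check more than one candidate.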
Hierarchical Multi-Process Fusion for Visual Place Recognition
In this paper we present a novel, hierarchical localization system that explicitly benefits from three varying characteristics of localization techniques: the distribution of their localization hypotheses, their appearance- and viewpoint-invariant properties, and the resulting differences in where in an environment each system works well and fails.