We tackle the problem of large-scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph.
Human visual scene understanding is so remarkable that we can recognize a revisited place even when entering it from the direction opposite to our first visit, and even in the presence of extreme variations in appearance.
In this paper we present an end-to-end deep learning framework to turn images that show dynamic content, such as vehicles or pedestrians, into realistic static frames.
The paper presents an approach to indoor personal localization on a mobile device based on visual place recognition.
State-of-the-art algorithms for visual place recognition, and related visual navigation systems, can be broadly split into two categories: computer-science-oriented models, including deep-learning- or image-retrieval-based techniques with minimal biological plausibility, and neuroscience-oriented dynamical networks that model the temporal properties underlying spatial navigation in the brain.
In recent years, there has been a rapid increase in the number of service robots deployed for aiding people in their daily activities.
This paper presents a cognition-inspired agnostic framework for building a map for Visual Place Recognition.
CityLearn features over 10 benchmark real-world datasets often used in place recognition research, with more than 100 recorded traversals across 60 cities around the world.
This paper proposes a novel visual place recognition algorithm, termed TextPlace, based on scene texts in the wild.