RaP-Net: A Region-wise and Point-wise Weighting Network to Extract Robust Features for Indoor Localization

Feature extraction plays an important role in visual localization. Unreliable features on dynamic objects or in repetitive regions interfere with feature matching and make indoor localization significantly harder. To address this problem, we propose a novel network, RaP-Net, that simultaneously predicts region-wise invariability and point-wise reliability, and extracts features by considering both. We also introduce a new dataset, named OpenLORIS-Location, to train the proposed network. The dataset contains 1553 images from 93 indoor locations. It includes various appearance changes between images of the same location, which help the model learn invariability in typical indoor scenes. Experimental results show that RaP-Net trained with the OpenLORIS-Location dataset achieves excellent performance on the feature matching task and significantly outperforms state-of-the-art feature algorithms in indoor localization. The RaP-Net code and dataset are available at https://github.com/ivipsourcecode/RaP-Net.
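The core idea described in the abstract, weighting each candidate feature by both a region-wise invariability map and a point-wise reliability score before selection, can be sketched as follows. This is a minimal illustration with assumed shapes and a multiplicative fusion of the two weights; the actual network architecture and fusion scheme are defined in the paper and repository.

```python
import numpy as np

def select_keypoints(region_weight, point_reliability, keypoints, k=2):
    """Rank candidate keypoints by combined region and point weights.

    region_weight     : (H, W) map, higher = more invariant region
                        (e.g. static structure vs. dynamic objects)
    point_reliability : (H, W) map, higher = more reliable point
    keypoints         : (N, 2) integer (row, col) candidate locations
    k                 : number of keypoints to keep

    Fusion by elementwise product is an assumption for illustration.
    """
    rows, cols = keypoints[:, 0], keypoints[:, 1]
    score = region_weight[rows, cols] * point_reliability[rows, cols]
    order = np.argsort(-score)  # sort by descending combined score
    return keypoints[order[:k]], score[order[:k]]

# Toy example: 4x4 weight maps, three candidate points.
rng = np.random.default_rng(0)
region = np.array([[0.9, 0.9, 0.1, 0.1]] * 4)  # left half "invariant"
point = rng.random((4, 4))
cands = np.array([[0, 0], [1, 3], [3, 1]])
top, scores = select_keypoints(region, point, cands, k=2)
```

The effect is that a locally salient point inside a low-invariability region (e.g. on a dynamic object) is down-weighted even if its point-wise score is high, which is the behavior the paper targets for robust indoor localization.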
