Point-cloud-based place recognition using CNN feature extraction

23 Oct 2018  ·  Ting Sun, Ming Liu, Haoyang Ye, Dit-yan Yeung

This paper proposes a novel point-cloud-based place recognition system that adopts a deep learning approach to feature extraction. The system uses a convolutional neural network (CNN) pre-trained on color images to extract features from range images, without any fine-tuning on additional range data, and achieves a significant improvement over hand-crafted features. The resulting system is illumination invariant, rotation invariant, and robust against moving objects that are unrelated to the place identity. Apart from the system itself, we also bring to the community a new place recognition dataset containing both point clouds and grayscale images covering a full $360^\circ$ environmental view. In addition, the dataset is organized so that it supports separate experimental validation of rotation invariance and of robustness against unrelated moving objects.
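
The following is a minimal sketch of this kind of pipeline: a point cloud is projected into a 2D range image, and a CNN pre-trained on ImageNet color images extracts a global descriptor from it without fine-tuning, which is then matched by nearest-neighbour search. The spherical projection parameters, the choice of VGG-16 as the backbone, and the global-average-pooling descriptor are illustrative assumptions, not necessarily the exact configuration used in the paper.

```python
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T

def point_cloud_to_range_image(points, height=64, width=360):
    """Hypothetical spherical projection of an (N, 3) LiDAR point cloud
    into a (height x width) range image, where each pixel stores range."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8
    azimuth = np.arctan2(y, x)                      # horizontal angle in [-pi, pi]
    elevation = np.arcsin(z / r)                    # vertical angle
    u = ((azimuth + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
    v = ((elevation - elevation.min()) /
         (elevation.max() - elevation.min() + 1e-8) * (height - 1)).astype(int)
    image = np.zeros((height, width), dtype=np.float32)
    image[v, u] = r                                 # last range wins on collisions
    return image

# Off-the-shelf CNN pre-trained on ImageNet; only the convolutional layers
# are used as a fixed feature extractor (no fine-tuning on range images).
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

# The single-channel range image is normalized and replicated to three
# channels so it can be fed to a network trained on RGB images.
preprocess = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224)),
    T.Lambda(lambda x: x.repeat(3, 1, 1) / (x.max() + 1e-8)),
])

def extract_descriptor(range_image):
    """Global descriptor: CNN feature maps pooled over the spatial dimensions."""
    with torch.no_grad():
        feat = cnn(preprocess(range_image).unsqueeze(0))   # (1, C, h, w)
    return feat.mean(dim=(2, 3)).squeeze(0)                # (C,)

def match(query_desc, database_descs):
    """Place recognition by nearest-neighbour search over stored descriptors."""
    dists = torch.stack([torch.norm(query_desc - d) for d in database_descs])
    return int(dists.argmin())
```

In practice, the choice of which convolutional layer to pool and the distance metric used for matching both affect retrieval quality; the global-average-pooled VGG features above are just one reasonable baseline.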
