LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and Environment Analysis

Point cloud based place recognition remains an open problem due to the difficulty of extracting local features from the raw 3D point cloud and generating a global descriptor, and it becomes even harder in large-scale dynamic environments. In this paper, we develop a novel deep neural network, named LPD-Net (Large-scale Place Description Network), which can extract discriminative and generalizable global descriptors from the raw 3D point cloud. We propose two modules, an adaptive local feature extraction module and a graph-based neighborhood aggregation module, which together extract local structures and reveal the spatial distribution of local features in the large-scale point cloud in an end-to-end manner. We apply the proposed global descriptor to point cloud based retrieval tasks to achieve large-scale place recognition. Comparison results show that our LPD-Net substantially outperforms PointNetVLAD and reaches the state-of-the-art. We also compare our LPD-Net with vision-based solutions to show the robustness of our approach to different weather and lighting conditions.
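The graph-based neighborhood aggregation idea can be illustrated with a minimal sketch: build a kNN graph over per-point features and pool each point's neighborhood to capture the local feature distribution. This is not the authors' implementation; the feature dimension, the value of k, and the max-pooling choice are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' code): graph-based neighborhood
# aggregation over a kNN graph built in feature space.
import torch

def knn_indices(x, k):
    # x: (N, C) per-point features; returns (N, k) indices of the
    # k nearest neighbors of each point in feature space.
    dist = torch.cdist(x, x)                                # (N, N) pairwise L2 distances
    return dist.topk(k + 1, largest=False).indices[:, 1:]   # drop self-match

def neighborhood_aggregate(x, k=16):
    # Max-pool each point's features over its kNN neighborhood,
    # summarizing the local distribution of features around the point.
    idx = knn_indices(x, k)               # (N, k)
    neighbors = x[idx]                    # (N, k, C) gathered neighbor features
    return neighbors.max(dim=1).values    # (N, C) aggregated features

if __name__ == "__main__":
    pts = torch.randn(4096, 64)           # 4096 points, 64-dim features (hypothetical sizes)
    out = neighborhood_aggregate(pts)
    print(out.shape)                      # torch.Size([4096, 64])
```

Pooling in feature space rather than only in Euclidean space is one way such a module can relate geometrically distant but structurally similar points; the exact aggregation used by LPD-Net may differ.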

Task                  | Dataset                               | Model              | Metric              | Value | Global Rank
----------------------|---------------------------------------|--------------------|---------------------|-------|------------
3D Place Recognition  | CS-Campus3D                           | LPD-Net            | AR@1%               | 59.49 | #5
3D Place Recognition  | CS-Campus3D                           | LPD-Net            | AR@1                | 45.94 | #5
3D Place Recognition  | CS-Campus3D                           | LPD-Net            | AR@1% cross-source  | 40.70 | #7
3D Place Recognition  | CS-Campus3D                           | LPD-Net            | AR@1 cross-source   | 11.99 | #7
3D Place Recognition  | Oxford RobotCar Dataset               | LPD-Net            | AR@1                | 86.3  | #5
3D Place Recognition  | Oxford RobotCar Dataset               | LPD-Net            | AR@1%               | 94.9  | #6
Point Cloud Retrieval | Oxford RobotCar (LiDAR 4096 points)   | LPD-Net (baseline) | recall@top1%        | 94.92 | #16
Point Cloud Retrieval | Oxford RobotCar (LiDAR 4096 points)   | LPD-Net (baseline) | recall@top1         | 86.28 | #15
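For context on the metrics above: AR@1% (also reported as recall@top1%) is the fraction of queries for which at least one true positive appears among the top 1% of retrieved database candidates, while AR@1 / recall@top1 checks only the single nearest neighbor. The sketch below shows this evaluation on hypothetical descriptors; the distance metric and positive-set construction are assumptions, not the benchmark's exact protocol.

```python
# Illustrative sketch (hypothetical data): retrieval-based place
# recognition evaluation with average recall at a top-n cutoff.
import numpy as np

def recall_at_top(queries, database, positives, top_n):
    # queries: (Q, D), database: (M, D) descriptor matrices;
    # positives[i]: set of database indices that are true matches
    # for query i (e.g., places near the query location).
    hits = 0
    for q, pos in zip(queries, positives):
        dists = np.linalg.norm(database - q, axis=1)   # L2 distances to all entries
        ranked = np.argsort(dists)[:top_n]             # top-n retrieved candidates
        hits += bool(pos.intersection(ranked))         # success if any true positive retrieved
    return 100.0 * hits / len(queries)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = rng.normal(size=(400, 256))                        # 400 database descriptors
    queries = db[:10] + 0.01 * rng.normal(size=(10, 256))   # perturbed copies as queries
    positives = [{i} for i in range(10)]
    print(recall_at_top(queries, db, positives, top_n=1))                      # AR@1
    print(recall_at_top(queries, db, positives,
                        top_n=max(1, round(0.01 * len(db)))))                  # AR@1%
```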
