PCAN: 3D Attention Map Learning Using Contextual Information for Point Cloud Based Retrieval

CVPR 2019 · Wenxiao Zhang, Chunxia Xiao

Point cloud based retrieval for place recognition is an emerging problem in the vision field. The main challenge is finding an efficient way to encode the local features into a discriminative global descriptor. In this paper, we propose a Point Contextual Attention Network (PCAN), which predicts the significance of each local point feature based on point context. Our network makes it possible to pay more attention to task-relevant features when aggregating local features. Experiments on various benchmark datasets show that the proposed network outperforms current state-of-the-art approaches.

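The core idea described above is to predict a per-point significance score and use it to weight local features during aggregation into the global descriptor. The sketch below is not the authors' exact PCAN architecture (which uses a contextual attention branch on top of PointNet features and NetVLAD-style aggregation); it is only a minimal, hypothetical illustration of attention-weighted pooling of per-point features, with an assumed `AttentionPooling` module and feature dimensions chosen for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionPooling(nn.Module):
    """Minimal sketch: predict a significance score per point feature,
    then aggregate the weighted features into one global descriptor.
    This is an illustration of the general idea, not the PCAN model itself."""

    def __init__(self, feat_dim: int = 1024):
        super().__init__()
        # Hypothetical score head: maps each local feature to a scalar score.
        self.score_head = nn.Sequential(
            nn.Linear(feat_dim, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, 1),
        )

    def forward(self, local_feats: torch.Tensor) -> torch.Tensor:
        # local_feats: (B, N, D) per-point features from some backbone (e.g. PointNet).
        scores = self.score_head(local_feats)                # (B, N, 1)
        weights = torch.softmax(scores, dim=1)               # normalize over the N points
        global_desc = (weights * local_feats).sum(dim=1)     # (B, D) attention-weighted sum
        return F.normalize(global_desc, dim=1)               # L2-normalized global descriptor


if __name__ == "__main__":
    feats = torch.randn(2, 4096, 1024)        # 2 clouds, 4096 points, 1024-d features
    print(AttentionPooling()(feats).shape)    # torch.Size([2, 1024])
```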

Results from the Paper


| Task | Dataset | Model | Metric | Value | Rank |
|------|---------|-------|--------|-------|------|
| 3D Place Recognition | Oxford RobotCar Dataset | PCAN | AR@1% | 83.8 | #9 |
| Point Cloud Retrieval | Oxford RobotCar (LiDAR 4096 points) | PCAN (refined) | recall@top1% | 86.4 | #20 |
| Point Cloud Retrieval | Oxford RobotCar (LiDAR 4096 points) | PCAN (refined) | recall@top1 | 70.72 | #18 |
| Point Cloud Retrieval | Oxford RobotCar (LiDAR 4096 points) | PCAN (baseline) | recall@top1% | 83.81 | #22 |
| Point Cloud Retrieval | Oxford RobotCar (LiDAR 4096 points) | PCAN (baseline) | recall@top1 | 69.05 | #19 |
