SOE-Net: A Self-Attention and Orientation Encoding Network for Point Cloud based Place Recognition

We tackle the problem of place recognition from point cloud data and introduce a self-attention and orientation encoding network (SOE-Net) that fully explores the relationships between points and incorporates long-range context into point-wise local descriptors. Local information of each point is captured from eight orientations in a PointOE module, while long-range feature dependencies among the local descriptors are captured with a self-attention unit. Moreover, we propose a novel loss function, the Hard Positive Hard Negative quadruplet loss (HPHN quadruplet), which achieves better performance than commonly used metric learning losses. Experiments on several benchmark datasets demonstrate the superior performance of the proposed network over current state-of-the-art approaches. Our code is publicly available at https://github.com/Yan-Xia/SOE-Net.
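For concreteness, below is a minimal PyTorch sketch of a self-attention unit operating on point-wise local descriptors, of the kind the abstract describes. It uses standard scaled dot-product attention with a residual connection; the projection sizes, the residual form, and the class name are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PointSelfAttention(nn.Module):
    """Sketch of a self-attention unit over point-wise descriptors.

    Standard scaled dot-product attention with a residual connection;
    layer sizes are illustrative assumptions, not SOE-Net's exact setup.
    """

    def __init__(self, dim):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)

    def forward(self, x):
        # x: (B, N, D) point-wise local descriptors
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        # (B, N, N) attention map: every point attends to every other point,
        # which is what injects long-range context into local descriptors
        attn = torch.softmax(q @ k.transpose(1, 2) / x.shape[-1] ** 0.5, dim=-1)
        return x + attn @ v  # residual: descriptors enriched with global context
```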
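Likewise, a hedged sketch of how a hard-positive/hard-negative quadruplet loss can be assembled from a batch of global descriptors: keep only the hardest positive (largest query-positive distance) and the hardest negative (smallest distance, measured from both the query and the auxiliary sample n* of the quadruplet), and apply a single-margin hinge. The function name, the Euclidean distance, the margin value, and the exact hard-negative pooling are our assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def hphn_quadruplet_loss(query, positives, negatives, neg_star, margin=0.5):
    """Sketch of an HPHN-style quadruplet loss (assumed reading of the paper).

    query:     (D,)   global descriptor of the query point cloud
    positives: (P, D) descriptors of positives for the query
    negatives: (N, D) descriptors of negatives for the query
    neg_star:  (D,)   auxiliary sample n*, negative to the query and to
                      all `negatives` (the fourth element of the quadruplet)
    """
    d_pos = torch.norm(positives - query, dim=1)       # query-positive distances
    d_neg_q = torch.norm(negatives - query, dim=1)     # query-negative distances
    d_neg_s = torch.norm(negatives - neg_star, dim=1)  # n*-negative distances

    hardest_pos = d_pos.max()                               # hardest positive
    hardest_neg = torch.min(d_neg_q.min(), d_neg_s.min())   # hardest negative
    # Single-margin hinge over the hardest pair only
    return F.relu(hardest_pos - hardest_neg + margin)
```

Compared with the lazy quadruplet loss used by PointNetVLAD, which sums two hinge terms with two margins, this formulation collapses everything into one term with one margin, which is the design choice the abstract credits for the improved metric learning.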


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| 3D Place Recognition | Oxford RobotCar Dataset | SOE-Net | AR@1% | 96.4 | #5 |
| Point Cloud Retrieval | Oxford RobotCar (LiDAR 4096 points) | SOE-Net (refined) | recall@top1% | 96.43 | #14 |
| Point Cloud Retrieval | Oxford RobotCar (LiDAR 4096 points) | SOE-Net (refined) | recall@top1 | 89.28 | #12 |
| Point Cloud Retrieval | Oxford RobotCar (LiDAR 4096 points) | SOE-Net (baseline) | recall@top1% | 96.4 | #15 |

Methods


No methods listed for this paper.