D3Feat: Joint Learning of Dense Detection and Description of 3D Local Features

A successful point cloud registration often relies on the robust establishment of sparse matches through discriminative 3D local features. Despite the fast evolution of learning-based 3D feature descriptors, little attention has been drawn to the learning of 3D feature detectors, even less to a joint learning of the two tasks. In this paper, we leverage a 3D fully convolutional network for 3D point clouds, and propose a novel and practical learning mechanism that densely predicts both a detection score and a description feature for each 3D point. In particular, we propose a keypoint selection strategy that overcomes the inherent density variations of 3D point clouds, and further propose a self-supervised detector loss guided by the on-the-fly feature matching results during training. Our method achieves state-of-the-art results in both indoor and outdoor scenarios, evaluated on the 3DMatch and KITTI datasets, and shows strong generalization ability on the ETH dataset. Towards practical use, we show that by adopting a reliable feature detector, sampling a smaller number of features is sufficient to achieve accurate and fast point cloud alignment. [code release](https://github.com/XuyangBai/D3Feat)
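To make the density-aware keypoint idea concrete, below is a minimal NumPy sketch of scoring dense per-point features so that salient, locally distinctive points can be sampled. The score definition (neighbourhood-normalized contrast times a channel-saliency term) and the neighbourhood size `k` are illustrative assumptions in the spirit of the abstract, not the paper's exact formulation.

```python
import numpy as np

def keypoint_scores(points, feats, k=16):
    """points: (N, 3) coordinates, feats: (N, C) dense per-point descriptors.
    Returns a (N,) saliency score per point."""
    # Brute-force k-nearest neighbours (fine for a small sketch).
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]           # (N, k), skip self
    neigh = feats[knn]                                 # (N, k, C)
    # Local contrast: how much each channel stands out from its
    # neighbourhood mean; normalizing by the neighbourhood makes the
    # score less sensitive to point-density variations.
    contrast = feats / (neigh.mean(axis=1) + 1e-8)     # (N, C)
    # Channel saliency: prefer the channel where this point is strongest.
    channel = feats / (feats.max(axis=1, keepdims=True) + 1e-8)
    return (contrast * channel).max(axis=1)            # (N,)

rng = np.random.default_rng(0)
pts = rng.standard_normal((200, 3))
fts = np.abs(rng.standard_normal((200, 32)))
scores = keypoint_scores(pts, fts)
topk = np.argsort(-scores)[:50]  # keep only the 50 most salient points
```

With a reliable score like this, registration can feed only the `topk` points (rather than random samples) to the matcher, which is the "smaller number of features" regime the abstract refers to.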

PDF Abstract (CVPR 2020)

Datasets

3DMatch, KITTI, ETH
Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Point Cloud Registration | 3DMatch Benchmark | D3Feat-Pred | Recall | 95.8 | #5 |
| Point Cloud Registration | 3DMatch Benchmark | D3Feat-rand | Recall | 95.3 | #6 |
| Point Cloud Registration | 3DMatch (trained on KITTI) | D3Feat-pred | Recall | 0.627 | #3 |
| Point Cloud Registration | ETH (trained on 3DMatch) | D3Feat-pred | Recall | 0.563 | #6 |
| Point Cloud Registration | KITTI | D3Feat-pred | Success Rate | 99.81 | #2 |
| Point Cloud Registration | KITTI (trained on 3DMatch) | D3Feat-pred | Success Rate | 36.76 | #4 |

Methods


No methods listed for this paper.