SpinNet: Learning a General Surface Descriptor for 3D Point Cloud Registration

Extracting robust and general 3D local features is key to downstream tasks such as point cloud registration and reconstruction. Existing learning-based local descriptors are either sensitive to rotation transformations or rely on classical handcrafted features that are neither general nor representative. In this paper, we introduce a new yet conceptually simple neural architecture, termed SpinNet, to extract local features that are rotationally invariant while sufficiently informative to enable accurate registration. A Spatial Point Transformer is first introduced to map the input local surface into a carefully designed cylindrical space, enabling end-to-end optimization with an SO(2)-equivariant representation. A Neural Feature Extractor, which leverages point-based and 3D cylindrical convolutional neural layers, is then used to derive a compact and representative descriptor for matching. Extensive experiments on both indoor and outdoor datasets demonstrate that SpinNet outperforms existing state-of-the-art techniques by a large margin. More critically, it has the best generalization ability across unseen scenarios with different sensor modalities. The code is available at https://github.com/QingyongHu/SpinNet.
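The core idea behind the cylindrical space can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal NumPy illustration, assuming the local patch has already been aligned so that a reference axis (e.g. the estimated surface normal) coincides with the z-axis, as SpinNet's Spatial Point Transformer does. In cylindrical coordinates, a rotation of the patch about the z-axis only shifts the angular coordinate, so binning points into a cylindrical grid yields a representation that is SO(2)-equivariant over the angular dimension (and becomes invariant after pooling or circular convolution over that dimension). All function and parameter names here are hypothetical.

```python
import numpy as np

def to_cylindrical(points):
    # points: (N, 3) local patch, assumed pre-aligned so the reference
    # axis (e.g. the surface normal) is the z-axis.
    rho = np.linalg.norm(points[:, :2], axis=1)          # radial distance
    theta = np.arctan2(points[:, 1], points[:, 0])       # angle in (-pi, pi]
    z = points[:, 2]                                     # height
    return np.stack([rho, theta, z], axis=1)

def cylindrical_voxel_grid(points, n_rho=3, n_theta=8, n_z=3,
                           radius=1.0, height=1.0):
    # Bin points into a (rho, theta, z) occupancy grid. A rotation of
    # the patch about the z-axis only circularly shifts the theta bins,
    # which is what makes circular convolution over theta effective.
    cyl = to_cylindrical(points)
    grid = np.zeros((n_rho, n_theta, n_z))
    r_idx = np.clip((cyl[:, 0] / radius * n_rho).astype(int), 0, n_rho - 1)
    t_idx = ((cyl[:, 1] + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    z_idx = np.clip(((cyl[:, 2] + height / 2) / height * n_z).astype(int),
                    0, n_z - 1)
    np.add.at(grid, (r_idx, t_idx, z_idx), 1.0)
    return grid
```

Rotating the patch about the z-axis by exactly one angular bin width (2π / n_theta) produces a grid that is a circular shift of the original along the theta axis, which is the equivariance property the descriptor relies on.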

CVPR 2021
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Point Cloud Registration | 3DMatch Benchmark | SpinNet (no code published as of Dec 15 2020) | Feature Matching Recall | 97.6 | #4 |
| Point Cloud Registration | 3DMatch (trained on KITTI) | SpinNet | Recall | 0.845 | #2 |
| Point Cloud Registration | ETH (trained on 3DMatch) | SpinNet | Recall | 0.928 | #2 |
| Point Cloud Registration | FAUST-partial (60%+ overlap, Rot 0-45, Trans -50-50, trained on 3DMatch) | SpinNet | Recall (%) | 42.46 | #5 |
| Point Cloud Registration | FAUST-partial (60%+ overlap, Rot 0-45, Trans -50-50, trained on 3DMatch) | SpinNet | RRE (degrees) | 3.105 | #3 |
| Point Cloud Registration | FAUST-partial (60%+ overlap, Rot 0-45, Trans -50-50, trained on 3DMatch) | SpinNet | RTE (cm) | 1.670 | #4 |
| Point Cloud Registration | KITTI | SpinNet | Success Rate | 99.10 | #3 |
| Point Cloud Registration | KITTI (trained on 3DMatch) | SpinNet | Success Rate | 81.44 | #4 |
