Points to Patches: Enabling the Use of Self-Attention for 3D Shape Recognition

8 Apr 2022  ·  Axel Berg, Magnus Oskarsson, Mark O'Connor

While the Transformer architecture has become ubiquitous in the machine learning field, its adaptation to 3D shape recognition is non-trivial. Due to its quadratic computational complexity, the self-attention operator quickly becomes inefficient as the set of input points grows larger. Furthermore, we find that the attention mechanism struggles to identify useful connections between individual points on a global scale. In order to alleviate these problems, we propose a two-stage Point Transformer-in-Transformer (Point-TnT) approach which combines local and global attention mechanisms, enabling both individual points and patches of points to attend to each other effectively. Experiments on shape classification show that such an approach provides more useful features for downstream tasks than the baseline Transformer, while also being more computationally efficient. In addition, we extend our method to feature matching for scene reconstruction, showing that it can be used in conjunction with existing scene reconstruction pipelines.
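The sketch below illustrates the core "points-to-patches" idea described in the abstract: points are grouped into local patches, self-attention is applied within each patch, and a second self-attention stage operates across patch tokens. This is a minimal, hypothetical PyTorch sketch; the grouping strategy (random anchors plus kNN), module names, and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of two-stage local/global attention over patches of points.
# Assumes random anchor sampling + kNN grouping; the actual Point-TnT model may differ.
import torch
import torch.nn as nn


def group_points(points, num_patches, patch_size):
    """Group a point cloud (B, N, 3) into patches via random anchors + kNN."""
    B, N, _ = points.shape
    idx = torch.stack([torch.randperm(N)[:num_patches] for _ in range(B)])  # (B, P)
    anchors = torch.gather(points, 1, idx.unsqueeze(-1).expand(-1, -1, 3))  # (B, P, 3)
    dists = torch.cdist(anchors, points)                                    # (B, P, N)
    knn = dists.topk(patch_size, largest=False).indices                     # (B, P, k)
    patches = torch.gather(
        points.unsqueeze(1).expand(-1, num_patches, -1, -1),
        2, knn.unsqueeze(-1).expand(-1, -1, -1, 3))                         # (B, P, k, 3)
    return patches - anchors.unsqueeze(2), anchors                          # local coordinates


class TwoStageAttention(nn.Module):
    """Local attention among points within a patch, then global attention across patches."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.embed = nn.Linear(3, dim)
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, patches):
        # patches: (B, P, k, 3) -> per-point features
        B, P, k, _ = patches.shape
        x = self.embed(patches).reshape(B * P, k, -1)
        # Stage 1: points within each patch attend to each other (local).
        x = x + self.local_attn(x, x, x, need_weights=False)[0]
        # Pool each patch into a single token.
        tokens = x.max(dim=1).values.reshape(B, P, -1)
        # Stage 2: patch tokens attend to each other (global).
        tokens = tokens + self.global_attn(tokens, tokens, tokens, need_weights=False)[0]
        return tokens  # (B, P, dim) patch features for a downstream classification head


if __name__ == "__main__":
    pts = torch.randn(2, 1024, 3)                                 # toy point clouds
    patches, _ = group_points(pts, num_patches=32, patch_size=16)
    feats = TwoStageAttention()(patches)
    print(feats.shape)                                            # torch.Size([2, 32, 128])
```

Because attention is applied only within small patches and then across a much shorter sequence of patch tokens, the quadratic cost of self-attention is paid over far fewer elements at each stage than it would be over the full point set.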

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Point Cloud Registration | 3DMatch Benchmark | DIP + Point-TnT | Feature Matching Recall | 96.8 | #6 |
| 3D Point Cloud Classification | ModelNet40 | Point-TnT | Overall Accuracy | 92.6 | #73 |
| 3D Point Cloud Classification | ModelNet40 | Point-TnT | Number of params | 3.9M | #95 |
| 3D Point Cloud Classification | ScanObjectNN | Point-TnT | Overall Accuracy | 83.5 | #46 |
| 3D Point Cloud Classification | ScanObjectNN | Point-TnT | Mean Accuracy | 81.0 | #19 |
| 3D Point Cloud Classification | ScanObjectNN | Point-TnT | Number of params | 3.9M | #56 |
| 3D Point Cloud Classification | ScanObjectNN | Point-TnT | FLOPs | 1.19G | #1 |
