Mask3D: Mask Transformer for 3D Semantic Instance Segmentation

6 Oct 2022  ·  Jonas Schult, Francis Engelmann, Alexander Hermans, Or Litany, Siyu Tang, Bastian Leibe

Modern 3D semantic instance segmentation approaches predominantly rely on specialized voting mechanisms followed by carefully designed geometric clustering techniques. Building on the successes of recent Transformer-based methods for object detection and image segmentation, we propose the first Transformer-based approach for 3D semantic instance segmentation. We show that we can leverage generic Transformer building blocks to directly predict instance masks from 3D point clouds. In our model, called Mask3D, each object instance is represented as an instance query. Using Transformer decoders, the instance queries are learned by iteratively attending to point cloud features at multiple scales. Combined with point features, the instance queries directly yield all instance masks in parallel. Mask3D has several advantages over current state-of-the-art approaches: it (1) does not rely on voting schemes that require hand-selected geometric properties (such as centers), (2) does not need geometric grouping mechanisms with manually-tuned hyper-parameters (e.g., radii), and (3) enables a loss that directly optimizes instance masks. Mask3D sets a new state-of-the-art on ScanNet test (+6.2 mAP), S3DIS 6-fold (+10.1 mAP), STPLS3D (+11.2 mAP) and ScanNet200 test (+12.4 mAP).
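To illustrate the core idea, here is a minimal sketch (not the authors' implementation) of how learned instance queries, combined with per-point features, can yield all instance masks in parallel. The tensor shapes, the single decoder step, and the module below are assumptions for illustration only; Mask3D uses a sparse-convolutional backbone and multiple Transformer decoder layers attending to features at several scales.

```python
# Sketch: parallel instance-mask prediction from instance queries (illustrative only).
import torch
import torch.nn as nn

class MaskPredictionSketch(nn.Module):
    def __init__(self, num_queries=100, dim=128, num_classes=20):
        super().__init__()
        # Learnable instance queries, one per potential object instance.
        self.queries = nn.Embedding(num_queries, dim)
        # One cross-attention step standing in for the iterative Transformer
        # decoder that refines queries against multi-scale point features.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.class_head = nn.Linear(dim, num_classes + 1)  # +1 for "no object"

    def forward(self, point_feats):
        # point_feats: (B, N, dim) per-point features from a 3D backbone.
        B = point_feats.shape[0]
        q = self.queries.weight.unsqueeze(0).expand(B, -1, -1)      # (B, Q, dim)
        # Queries attend to point features (done once here; Mask3D iterates).
        q, _ = self.cross_attn(q, point_feats, point_feats)
        # Per-query semantic class logits.
        class_logits = self.class_head(q)                            # (B, Q, C+1)
        # Dot product of queries with point features gives per-point mask
        # logits, so all instance masks are predicted in parallel.
        mask_logits = torch.einsum("bqd,bnd->bqn", q, point_feats)   # (B, Q, N)
        return class_logits, mask_logits.sigmoid()

# Usage:
# feats = torch.randn(1, 5000, 128)
# cls, masks = MaskPredictionSketch()(feats)   # masks: (1, 100, 5000)
```

Because each query directly produces a full mask, a loss can be applied to the predicted masks themselves (after matching predictions to ground-truth instances, as in DETR-style set prediction), rather than supervising intermediate quantities such as votes or offsets.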


Results from the Paper


Task                     | Dataset     | Model  | Metric | Value | Global Rank
3D Instance Segmentation | S3DIS       | Mask3D | AP@50  | 75.5  | #2
3D Instance Segmentation | S3DIS       | Mask3D | mAP    | 64.5  | #1
3D Instance Segmentation | ScanNet200  | Mask3D | mAP    | 27.8  | #1
3D Instance Segmentation | ScanNet(v2) | Mask3D | mAP    | 55.2  | #5
3D Instance Segmentation | ScanNet(v2) | Mask3D | mAP@50 | 78.0  | #3
3D Instance Segmentation | ScanNet(v2) | Mask3D | mAP@25 | 87.0  | #4
3D Instance Segmentation | STPLS3D     | Mask3D | AP@50  | 74.3  | #1
3D Instance Segmentation | STPLS3D     | Mask3D | AP@25  | 81.6  | #1
3D Instance Segmentation | STPLS3D     | Mask3D | AP     | 57.3  | #1