Position-Guided Point Cloud Panoptic Segmentation Transformer

23 Mar 2023 · Zeqi Xiao, Wenwei Zhang, Tai Wang, Chen Change Loy, Dahua Lin, Jiangmiao Pang

DEtection TRansformer (DETR) started a trend of using a group of learnable queries for unified visual perception. This work begins by applying this appealing paradigm to LiDAR-based point cloud segmentation and obtains a simple yet effective baseline. Although the naive adaptation achieves fair results, its instance segmentation performance is noticeably inferior to that of previous works. By diving into the details, we observe that instances in sparse point clouds are relatively small compared to the whole scene and often have similar geometry while lacking distinctive appearance for segmentation, a situation rare in the image domain. Considering that instances in 3D are characterized more by their positional information, we emphasize its role during modeling and design a robust Mixed-parameterized Positional Embedding (MPE) to guide the segmentation process. It is embedded into backbone features and later guides the mask prediction and query update processes iteratively, leading to Position-Aware Segmentation (PA-Seg) and Masked Focal Attention (MFA). All these designs impel the queries to attend to specific regions and identify various instances. The method, named Position-guided Point cloud Panoptic segmentation transFormer (P3Former), outperforms previous state-of-the-art methods by 3.4% and 1.2% PQ on the SemanticKITTI and nuScenes benchmarks, respectively. The source code and models are available at https://github.com/SmartBot-PJLab/P3Former .
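The abstract does not spell out how the mixed parameterization is constructed. Below is a minimal PyTorch sketch assuming MPE fuses a Cartesian (x, y, z) parameterization of each point with a polar (r, θ, z) one before projecting into the backbone feature space; the module name, layer choices, and fusion-by-concatenation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MixedPositionalEmbedding(nn.Module):
    """Hypothetical sketch of a Mixed-parameterized Positional Embedding (MPE).

    Assumes the embedding combines Cartesian (x, y, z) and polar (r, theta, z)
    parameterizations of each LiDAR point; layer shapes and the concatenation-
    based fusion are illustrative, not the paper's actual design.
    """

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        self.cart_proj = nn.Linear(3, embed_dim)   # embeds (x, y, z)
        self.polar_proj = nn.Linear(3, embed_dim)  # embeds (r, theta, z)
        self.fuse = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, xyz: torch.Tensor) -> torch.Tensor:
        # xyz: (N, 3) point coordinates in the LiDAR frame
        x, y, z = xyz.unbind(dim=-1)
        r = torch.sqrt(x ** 2 + y ** 2)            # range in the ground plane
        theta = torch.atan2(y, x)                  # azimuth angle
        polar = torch.stack([r, theta, z], dim=-1)
        both = torch.cat([self.cart_proj(xyz), self.polar_proj(polar)], dim=-1)
        return self.fuse(both)                     # (N, embed_dim)

# Usage: embed point positions and add them to per-point backbone features,
# so downstream mask prediction and query updates are position-guided.
points = torch.rand(1024, 3) * 50.0                # dummy LiDAR points
features = torch.rand(1024, 256)                   # dummy backbone features
mpe = MixedPositionalEmbedding(embed_dim=256)
features = features + mpe(points)
```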

Task: Panoptic Segmentation · Dataset: SemanticKITTI · Model: P3Former

Metric   Value   Global Rank
PQ       0.649   #1
PQ†      0.7     #1
RQ       0.759   #1
SQ       0.849   #1
PQth     0.671   #1
RQth     0.741   #1
SQth     0.906   #1
PQst     0.633   #1
RQst     0.772   #1
SQst     0.807   #1
mIoU     0.683   #1
