CMT-DeepLab: Clustering Mask Transformers for Panoptic Segmentation

We propose Clustering Mask Transformer (CMT-DeepLab), a transformer-based framework for panoptic segmentation designed around clustering. Rethinking the existing transformer architectures used in segmentation and detection, CMT-DeepLab treats the object queries as cluster centers, which take on the role of grouping pixels when applied to segmentation. The clustering is computed with an alternating procedure: pixels are first assigned to clusters by their feature affinity, and the cluster centers and pixel features are then updated. Together, these operations comprise the Clustering Mask Transformer (CMT) layer, which produces cross-attention that is denser and more consistent with the final segmentation task. CMT-DeepLab significantly improves over prior art by 4.4% PQ, achieving a new state-of-the-art of 55.7% PQ on the COCO test-dev set.
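The alternating procedure described above can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the paper's implementation: the function name, the softmax-based pixel-to-cluster assignment, and the residual pixel update are assumptions chosen to mirror the described steps (assign pixels by feature affinity, then update centers and pixel features).

```python
import numpy as np

def cmt_layer_sketch(pixel_feats, centers):
    """One illustrative clustering step: assign pixels to centers by
    feature affinity, then update centers and pixel features.

    pixel_feats: (N, D) array of pixel features
    centers:     (K, D) array of cluster centers (object queries)
    """
    # 1) Cluster assignment: per-pixel softmax over clusters, driven by
    #    feature affinity (dot product between pixels and centers).
    affinity = pixel_feats @ centers.T                        # (N, K)
    assign = np.exp(affinity - affinity.max(axis=1, keepdims=True))
    assign /= assign.sum(axis=1, keepdims=True)               # rows sum to 1

    # 2) Cluster update: each center becomes the assignment-weighted
    #    mean of the pixel features assigned to it.
    col_mass = assign.sum(axis=0, keepdims=True) + 1e-8
    new_centers = (assign / col_mass).T @ pixel_feats         # (K, D)

    # 3) Pixel update: residual refinement of pixel features using the
    #    centers they are assigned to (a simplification of the paper's
    #    feature-update step).
    new_pixels = pixel_feats + assign @ centers               # (N, D)
    return new_pixels, new_centers, assign
```

Stacking such layers lets the assignment map double as a dense cross-attention that directly corresponds to the predicted segmentation masks.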

CVPR 2022
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Panoptic Segmentation | Cityscapes val | CMT-DeepLab (MaX-S, single-scale, IN-1K) | PQ | 64.6 | #16 |
| Panoptic Segmentation | Cityscapes val | CMT-DeepLab (MaX-S, single-scale, IN-1K) | mIoU | 81.4 | #17 |
| Panoptic Segmentation | COCO minival | CMT-DeepLab (single-scale) | PQ | 55.3 | #16 |
| Panoptic Segmentation | COCO minival | CMT-DeepLab (single-scale) | PQth | 61.0 | #12 |
| Panoptic Segmentation | COCO minival | CMT-DeepLab (single-scale) | PQst | 46.6 | #12 |
| Panoptic Segmentation | COCO test-dev | CMT-DeepLab (single-scale) | PQ | 55.7 | #6 |
| Panoptic Segmentation | COCO test-dev | CMT-DeepLab (single-scale) | PQst | 46.8 | #5 |
| Panoptic Segmentation | COCO test-dev | CMT-DeepLab (single-scale) | PQth | 61.6 | #5 |
