Dense Cross-Query-and-Support Attention Weighted Mask Aggregation for Few-Shot Segmentation

18 Jul 2022 · Xinyu Shi, Dong Wei, Yu Zhang, Donghuan Lu, Munan Ning, Jiashun Chen, Kai Ma, Yefeng Zheng

Research into Few-shot Semantic Segmentation (FSS) has attracted great attention, with the goal of segmenting target objects in a query image given only a few annotated support images of the target class. A key to this challenging task is to fully utilize the information in the support images by exploiting fine-grained correlations between the query and support images. However, most existing approaches either compress the support information into a few class-wise prototypes or use only partial support information (e.g., only the foreground) at the pixel level, causing non-negligible information loss. In this paper, we propose Dense pixel-wise Cross-query-and-support Attention weighted Mask Aggregation (DCAMA), in which both foreground and background support information are fully exploited via multi-level pixel-wise correlations between paired query and support features. Implemented with the scaled dot-product attention of the Transformer architecture, DCAMA treats every query pixel as a token, computes its similarities with all support pixels, and predicts its segmentation label as an additive aggregation of all the support pixels' labels, weighted by the similarities. Based on this unique formulation, we further propose an efficient and effective one-pass inference scheme for n-shot segmentation, in which the pixels of all support images are collected for mask aggregation at once. Experiments show that DCAMA significantly advances the state of the art on the standard FSS benchmarks PASCAL-5i, COCO-20i, and FSS-1000, e.g., with 3.1%, 9.7%, and 3.6% absolute improvements in 1-shot mIoU over the previous best records. Ablative studies also verify the design of DCAMA.
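The following is a minimal PyTorch sketch of the attention-weighted mask aggregation idea at a single feature level, assuming pre-extracted query/support features and binary support masks. The function and argument names (dcama_aggregate, query_feat, support_feats, support_masks) are illustrative, not the authors' implementation, which applies this at multiple feature levels and passes the aggregated maps through a decoder.

```python
# Illustrative sketch (not the official code): query pixels attend over all
# support pixels, and the support mask labels are aggregated as attention values.
import torch


def dcama_aggregate(query_feat, support_feats, support_masks):
    """Soft query-mask prediction by similarity-weighted aggregation of support labels.

    query_feat:    (B, C, Hq, Wq)      query-image features at one level
    support_feats: (B, N, C, Hs, Ws)   features of N support images (N = 1 for 1-shot)
    support_masks: (B, N, Hs, Ws)      binary support masks (1 = foreground)
    Returns:       (B, Hq, Wq)         soft foreground scores for the query pixels
    """
    B, C, Hq, Wq = query_feat.shape
    _, N, _, Hs, Ws = support_feats.shape

    # Every query pixel is a token (the attention "query").
    q = query_feat.flatten(2).transpose(1, 2)                        # (B, Hq*Wq, C)
    # Pixels of all N support images are collected at once (one-pass n-shot).
    k = support_feats.flatten(3).permute(0, 1, 3, 2).reshape(B, N * Hs * Ws, C)
    # The support mask labels act as the attention "values".
    v = support_masks.reshape(B, N * Hs * Ws, 1).float()

    # Scaled dot-product attention: per-query-pixel similarity to every support pixel,
    # then an additive, similarity-weighted aggregation of the support pixels' labels.
    attn = torch.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)   # (B, Hq*Wq, N*Hs*Ws)
    fg_score = attn @ v                                              # (B, Hq*Wq, 1)
    return fg_score.view(B, Hq, Wq)
```

Note that n-shot inference requires no extra machinery in this formulation: the support pixels of all n images are simply concatenated along the key/value dimension, so the aggregation still runs in a single forward pass.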

Task: Few-Shot Semantic Segmentation (leaderboard results with global ranks)

| Dataset | Model | Mean IoU (Global Rank) | FB-IoU (Global Rank) |
|---|---|---|---|
| COCO-20i (1-shot) | DCAMA (Swin-B) | 50.9 (#4) | 73.2 (#3) |
| COCO-20i (1-shot) | DCAMA (ResNet-50) | 43.3 (#34) | 69.5 (#18) |
| COCO-20i (1-shot) | DCAMA (ResNet-101) | 43.5 (#33) | 69.9 (#13) |
| COCO-20i (5-shot) | DCAMA (ResNet-101) | 51.9 (#19) | 73.3 (#9) |
| COCO-20i (5-shot) | DCAMA (Swin-B) | 58.3 (#4) | 76.9 (#3) |
| COCO-20i (5-shot) | DCAMA (ResNet-50) | 48.3 (#38) | 71.7 (#20) |
| FSS-1000 (1-shot) | DCAMA (Swin-B) | 90.1 (#6) | 93.8 (#2) |
| FSS-1000 (1-shot) | DCAMA (ResNet-50) | 88.2 (#12) | 92.5 (#4) |
| FSS-1000 (1-shot) | DCAMA (ResNet-101) | 88.3 (#11) | 92.4 (#5) |
| FSS-1000 (5-shot) | DCAMA (ResNet-50) | 88.8 (#11) | 92.9 (#5) |
| FSS-1000 (5-shot) | DCAMA (Swin-B) | 90.4 (#7) | 94.1 (#3) |
| FSS-1000 (5-shot) | DCAMA (ResNet-101) | 89.1 (#10) | 93.1 (#4) |
| PASCAL-5i (1-shot) | DCAMA (Swin-B) | 69.3 (#8) | 78.5 (#17) |
| PASCAL-5i (1-shot) | DCAMA (ResNet-101) | – | 77.6 (#27) |
| PASCAL-5i (1-shot) | DCAMA (ResNet-50) | 64.6 (#50) | 75.7 (#35) |
| PASCAL-5i (5-shot) | DCAMA (Swin-B) | 74.9 (#6) | 82.9 (#8) |
| PASCAL-5i (5-shot) | DCAMA (ResNet-101) | 68.3 (#49) | 80.8 (#22) |
| PASCAL-5i (5-shot) | DCAMA (ResNet-50) | 68.5 (#47) | 79.5 (#29) |
