PANet: Few-Shot Image Semantic Segmentation with Prototype Alignment

Despite the great progress made by deep CNNs in image semantic segmentation, they typically require a large number of densely-annotated images for training and are difficult to generalize to unseen object categories. Few-shot segmentation has thus been developed to learn to perform segmentation from only a few annotated examples. In this paper, we tackle the challenging few-shot segmentation problem from a metric learning perspective and present PANet, a novel prototype alignment network that better utilizes the information in the support set. PANet learns class-specific prototype representations from a few support images within an embedding space and then segments the query images by matching each pixel to the learned prototypes. With non-parametric metric learning, PANet produces high-quality prototypes that are representative of each semantic class while remaining discriminative across classes. Moreover, PANet introduces a prototype alignment regularization between the support and query sets. With this, PANet fully exploits knowledge from the support set and generalizes better in few-shot segmentation. Notably, our model achieves mIoU scores of 48.1% and 55.7% on PASCAL-5i for the 1-shot and 5-shot settings respectively, surpassing the state-of-the-art method by 1.8% and 8.6%.
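To make the prototype-matching idea above concrete, here is a minimal PyTorch sketch: a class prototype is pooled from support features under the support mask, and each query pixel is assigned to its most similar prototype. The masked average pooling, the cosine similarity measure, and the fixed scaling factor are illustrative assumptions here, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F


def masked_average_pooling(features, mask):
    """Pool support features into a single class prototype.

    features: (B, C, H, W) embeddings from a shared feature extractor.
    mask:     (B, H, W) binary foreground mask for the target class.
    """
    # Resize the mask to the feature resolution before pooling.
    mask = F.interpolate(mask.unsqueeze(1).float(),
                         size=features.shape[-2:],
                         mode="bilinear", align_corners=False)
    # Average feature vectors over the masked (foreground) region.
    prototype = (features * mask).sum(dim=(0, 2, 3)) / (mask.sum(dim=(0, 2, 3)) + 1e-5)
    return prototype  # (C,)


def segment_query(query_features, prototypes, scale=20.0):
    """Label each query pixel by its nearest prototype under cosine similarity.

    query_features: (B, C, H, W)
    prototypes:     list of (C,) vectors, e.g. [background, class_1, ...]
    scale:          assumed temperature applied to the similarities.
    """
    sims = [
        F.cosine_similarity(query_features, p.view(1, -1, 1, 1), dim=1) * scale
        for p in prototypes
    ]
    logits = torch.stack(sims, dim=1)   # (B, num_prototypes, H, W)
    return logits.argmax(dim=1)         # per-pixel prototype index
```

The prototype alignment regularization mentioned in the abstract can be read as running the same procedure in reverse during training: prototypes pooled from the predicted query masks are used to segment the support images, and the resulting loss encourages consistency between the two directions.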

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Few-Shot Semantic Segmentation | COCO-20i (1-shot) | PANet (VGG-16) | Mean IoU | 20.9 | # 78 |
| Few-Shot Semantic Segmentation | COCO-20i (1-shot) | PANet (VGG-16) | FB-IoU | 59.2 | # 30 |
| Few-Shot Semantic Segmentation | COCO-20i (2-way 1-shot) | PANet (ResNet-50) | mIoU | 18.0 | # 6 |
| Few-Shot Semantic Segmentation | COCO-20i (5-shot) | PANet (VGG-16) | Mean IoU | 29.7 | # 73 |
| Few-Shot Semantic Segmentation | COCO-20i (5-shot) | PANet (VGG-16) | FB-IoU | 63.5 | # 29 |
| Few-Shot Semantic Segmentation | PASCAL-5i (1-shot) | PANet (VGG-16) | Mean IoU | 48.1 | # 98 |
| Few-Shot Semantic Segmentation | PASCAL-5i (1-shot) | PANet (VGG-16) | FB-IoU | 66.5 | # 51 |
| Few-Shot Semantic Segmentation | PASCAL-5i (5-shot) | PANet (VGG-16) | Mean IoU | 55.7 | # 89 |
| Few-Shot Semantic Segmentation | PASCAL-5i (5-shot) | PANet (VGG-16) | FB-IoU | 70.7 | # 47 |
