Iterative Few-shot Semantic Segmentation from Image Label Text

Few-shot semantic segmentation aims to learn to segment objects of unseen classes with the guidance of only a few support images. Most previous methods rely on pixel-level labels for the support images. In this paper, we focus on a more challenging setting in which only image-level labels are available. We propose a general framework that first generates coarse masks with the help of the powerful vision-language model CLIP, and then iteratively and mutually refines the mask predictions of the support and query images. Extensive experiments on the PASCAL-5i and COCO-20i datasets demonstrate that our method not only outperforms state-of-the-art weakly supervised approaches by a significant margin, but also achieves results comparable or superior to recent supervised methods. Moreover, our method generalizes well to in-the-wild images and uncommon classes. Code will be available at https://github.com/Whileherham/IMR-HSNet.
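The pipeline described above can be sketched at a high level: CLIP produces coarse initial masks for both support and query images from the class text, and the two predictions are then refined alternately, each serving as guidance for the other. The sketch below is illustrative only; `clip_coarse_mask` and `refine` are hypothetical stand-ins (not the paper's networks), with trivial placeholder logic so the loop structure is runnable.

```python
import numpy as np

def clip_coarse_mask(image, label_text):
    # Hypothetical stand-in: in the paper, CLIP's response to the class
    # text yields a coarse initial mask; here we just return a flat prior.
    h, w = image.shape[:2]
    return np.full((h, w), 0.5)

def refine(target_image, target_mask, guide_image, guide_mask):
    # Hypothetical stand-in for the refinement network, which updates the
    # target mask using the guide image/mask pair. Placeholder averaging
    # keeps values as valid probabilities in [0, 1].
    return np.clip(0.5 * (target_mask + guide_mask.mean()), 0.0, 1.0)

def iterative_mutual_refinement(support_img, query_img, label_text, n_iters=3):
    # Step 1: coarse masks for both images from the image-level label text.
    s_mask = clip_coarse_mask(support_img, label_text)
    q_mask = clip_coarse_mask(query_img, label_text)
    # Step 2: alternately refine each mask, guided by the other image's mask.
    for _ in range(n_iters):
        q_mask = refine(query_img, q_mask, support_img, s_mask)
        s_mask = refine(support_img, s_mask, query_img, q_mask)
    return q_mask

# Toy usage on dummy 4x4 RGB images.
q = iterative_mutual_refinement(np.zeros((4, 4, 3)), np.zeros((4, 4, 3)), "dog")
```

The mutual (rather than one-way) refinement is the key idea: the query prediction sharpens the support guidance, which in turn improves the next query prediction.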

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Few-Shot Semantic Segmentation | COCO-20i (1-shot) | IMR-HSNet (ResNet-50) | Mean IoU | 42.4 | #40 |
| Few-Shot Semantic Segmentation | COCO-20i (1-shot) | IMR-HSNet (VGG-16) | Mean IoU | 37.7 | #58 |
| Few-Shot Semantic Segmentation | COCO-20i (5-shot) | IMR-HSNet (ResNet-50) | Mean IoU | 44.4 | #54 |
| Few-Shot Semantic Segmentation | PASCAL-5i (1-shot) | IMR-HSNet (ResNet-50) | Mean IoU | 61.1 | #69 |
| Few-Shot Semantic Segmentation | PASCAL-5i (1-shot) | IMR-HSNet (VGG-16) | Mean IoU | 56.5 | #87 |
