SG-One: Similarity Guidance Network for One-Shot Semantic Segmentation

22 Oct 2018  ·  Xiaolin Zhang, Yunchao Wei, Yi Yang, Thomas Huang ·

One-shot semantic segmentation poses the challenging task of recognizing object regions from unseen categories with only one annotated example as supervision. In this paper, we propose a simple yet effective Similarity Guidance network to tackle the One-shot (SG-One) segmentation problem. We aim to predict the segmentation mask of a query image with reference to one densely labeled support image of the same category. To obtain a robust representative feature of the support image, we first adopt a masked average pooling strategy that produces the guidance feature by taking only the pixels belonging to the annotated object in the support image into account. We then use cosine similarity to relate the guidance feature to the features of individual pixels in the query image. In this way, the resulting similarity maps guide the process of segmenting objects. Furthermore, SG-One is a unified framework that efficiently processes both support and query images within one network and can be trained in an end-to-end manner. We conduct extensive experiments on PASCAL VOC 2012. In particular, SG-One achieves a mIoU score of 46.3%, surpassing the baseline methods.
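
The sketch below illustrates the two operations described in the abstract, masked average pooling over the support features and a cosine-similarity map that guides segmentation of the query features. It is a minimal PyTorch sketch, not the authors' released code; the function names, tensor shapes, and interpolation settings are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def masked_average_pooling(support_feat, support_mask):
    """support_feat: (B, C, H, W) features of the support image.
    support_mask: (B, 1, h, w) binary mask of the annotated object.
    Returns a (B, C) guidance vector averaged over foreground pixels only."""
    # Resize the mask to the feature-map resolution.
    mask = F.interpolate(support_mask.float(), size=support_feat.shape[-2:],
                         mode='bilinear', align_corners=False)
    # Sum features over foreground pixels, then divide by the foreground area.
    fg_sum = (support_feat * mask).sum(dim=(2, 3))
    area = mask.sum(dim=(2, 3)).clamp(min=1e-5)
    return fg_sum / area

def similarity_guidance(query_feat, guidance_vec):
    """query_feat: (B, C, H, W) query features; guidance_vec: (B, C).
    Returns a (B, 1, H, W) cosine-similarity map in [-1, 1] that can be used
    to guide the segmentation branch."""
    guidance = guidance_vec[:, :, None, None]                # (B, C, 1, 1)
    sim = F.cosine_similarity(query_feat, guidance, dim=1)   # (B, H, W)
    return sim.unsqueeze(1)
```

In this reading, the similarity map is computed once per support-query pair and multiplied into (or concatenated with) the query features before the final segmentation head, which is what allows a single network to handle both branches end-to-end.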

Task                             Dataset              Model             Metric     Value   Global Rank
Few-Shot Semantic Segmentation   PASCAL-5i (5-Shot)   SG-One (VGG-16)   Mean IoU   47.1    #89

Results from Other Papers


Task                             Dataset              Model             Metric     Value   Rank
Few-Shot Semantic Segmentation   PASCAL-5i (1-Shot)   SG-One (VGG-16)   Mean IoU   46.3    #97
