ReCo: Retrieve and Co-segment for Zero-shot Transfer

14 Jun 2022 · Gyungin Shin, Weidi Xie, Samuel Albanie

Semantic segmentation has a broad range of applications, but its real-world impact has been significantly limited by the prohibitive annotation costs necessary to enable deployment. Segmentation methods that forgo supervision can side-step these costs, but exhibit the inconvenient requirement to provide labelled examples from the target distribution to assign concept names to predictions. An alternative line of work in language-image pre-training has recently demonstrated the potential to produce models that can both assign names across large vocabularies of concepts and enable zero-shot transfer for classification, but do not demonstrate commensurate segmentation abilities. In this work, we strive to achieve a synthesis of these two approaches that combines their strengths. We leverage the retrieval abilities of one such language-image pre-trained model, CLIP, to dynamically curate training sets from unlabelled images for arbitrary collections of concept names, and leverage the robust correspondences offered by modern image representations to co-segment entities among the resulting collections. The synthetic segment collections are then employed to construct a segmentation model (without requiring pixel labels) whose knowledge of concepts is inherited from the scalable pre-training process of CLIP. We demonstrate that our approach, termed Retrieve and Co-segment (ReCo), performs favourably against unsupervised segmentation approaches while inheriting the convenience of nameable predictions and zero-shot transfer. We also demonstrate ReCo's ability to generate specialist segmenters for extremely rare objects.
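The pipeline described above can be illustrated with a short, hypothetical Python sketch: CLIP text embeddings rank an unlabelled image archive for each concept name, and the curated set is then handed to a co-segmentation step that produces pseudo-labels for training a segmenter. The archive embedding tensor, the prompt template, `k = 50`, and the `co_segment` / `build_pseudo_masks` helpers are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of a ReCo-style retrieve-and-co-segment pipeline (assumptions noted above).
import torch
import clip  # https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _preprocess = clip.load("ViT-L/14", device=device)


def retrieve(concept: str, archive_embeddings: torch.Tensor, k: int = 50) -> torch.Tensor:
    """Rank an unlabelled archive by CLIP similarity to a concept name and keep the top-k.

    `archive_embeddings` is assumed to be an (N, D) tensor of L2-normalised CLIP image
    embeddings living on the same device as the model.
    """
    tokens = clip.tokenize([f"a photo of a {concept}"]).to(device)
    with torch.no_grad():
        text_emb = model.encode_text(tokens).float()
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
    scores = (archive_embeddings @ text_emb.T).squeeze(1)  # cosine similarity, shape (N,)
    return scores.topk(min(k, scores.numel())).indices


def co_segment(images):
    """Placeholder (hypothetical helper): given a curated image set, extract dense visual
    features and segment the regions the images have in common."""
    raise NotImplementedError


def build_pseudo_masks(concepts, archive_images, archive_embeddings):
    """Curate an image set per concept name, then co-segment it to produce pseudo-labels
    usable for training a segmentation model without pixel annotations."""
    pseudo_labels = {}
    for concept in concepts:
        idx = retrieve(concept, archive_embeddings)
        curated = [archive_images[i] for i in idx.tolist()]
        pseudo_labels[concept] = co_segment(curated)
    return pseudo_labels
```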

Task: Unsupervised Semantic Segmentation with Language-image Pre-training

| Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|
| ADE20K | ReCo | Mean IoU (val) | 11.2 | #6 |
| Cityscapes val | ReCo | mIoU | 19.3 | #6 |
| Cityscapes val | ReCo | pixel accuracy | 74.6 | #2 |
| Cityscapes val | ReCo+ | mIoU | 24.2 | #4 |
| Cityscapes val | ReCo+ | pixel accuracy | 83.7 | #1 |
| COCO-Object | ReCo | mIoU | 15.7 | #8 |
| COCO-Stuff-171 | ReCo | mIoU | 14.8 | #6 |
| COCO-Stuff-27 | ReCo+ | mIoU | 32.6 | #1 |
| COCO-Stuff-27 | ReCo+ | pixel accuracy | 54.1 | #1 |
| COCO-Stuff-27 | ReCo | mIoU | 26.3 | #3 |
| COCO-Stuff-27 | ReCo | pixel accuracy | 46.1 | #2 |
| KITTI-STEP | ReCo+ | mIoU | 31.9 | #1 |
| KITTI-STEP | ReCo+ | pixel accuracy | 75.3 | #1 |
| KITTI-STEP | ReCo | mIoU | 29.8 | #2 |
| KITTI-STEP | ReCo | pixel accuracy | 70.6 | #2 |
| PASCAL Context-59 | ReCo | mIoU | 22.3 | #7 |
| PascalVOC-20 | ReCo | mIoU | 57.7 | #5 |
