Extract Free Dense Labels from CLIP

2 Dec 2021 · Chong Zhou, Chen Change Loy, Bo Dai

Contrastive Language-Image Pre-training (CLIP) has made a remarkable breakthrough in open-vocabulary zero-shot image recognition. Many recent studies leverage the pre-trained CLIP models for image-level classification and manipulation. In this paper, we wish to examine the intrinsic potential of CLIP for pixel-level dense prediction, specifically in semantic segmentation. To this end, with minimal modification, we show that MaskCLIP yields compelling segmentation results on open concepts across various datasets in the absence of annotations and fine-tuning. By adding pseudo labeling and self-training, MaskCLIP+ surpasses SOTA transductive zero-shot semantic segmentation methods by large margins, e.g., mIoUs of unseen classes on PASCAL VOC/PASCAL Context/COCO Stuff are improved from 35.6/20.7/30.3 to 86.1/66.7/54.7. We also test the robustness of MaskCLIP under input corruption and evaluate its capability in discriminating fine-grained objects and novel concepts. Our findings suggest that MaskCLIP can serve as a new reliable source of supervision for dense prediction tasks to achieve annotation-free segmentation. Source code is available at https://github.com/chongzhou96/MaskCLIP.
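The "minimal modification" at the heart of MaskCLIP is small enough to sketch. The snippet below is a hedged PyTorch reconstruction of the idea, not the reference implementation in the linked repository: capture the spatial feature map that CLIP's ResNet image encoder produces just before its attention-pooling layer, reuse that layer's value and output projections as 1x1 convolutions while skipping the query-key attention, and classify each location against CLIP text embeddings of the class names. The attribute names (`model.visual.layer4`, `attnpool.v_proj`, `attnpool.c_proj`) follow the openai/CLIP RN50 implementation; the prompts and `example.jpg` are placeholders.

```python
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

# 1. Text embeddings of the target class names act as a fixed 1x1 classifier.
#    The prompts are placeholders; the paper uses prompt ensembling.
class_names = ["a photo of a dog", "a photo of a cat", "a photo of grass"]
with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize(class_names).to(device))
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)         # (K, C)

# 2. Capture the spatial feature map right before CLIP's attention pooling.
features = {}
model.visual.layer4.register_forward_hook(
    lambda module, inp, out: features.update(last=out))

img = preprocess(Image.open("example.jpg")).unsqueeze(0).to(device)      # placeholder image
with torch.no_grad():
    model.encode_image(img)                                              # fills features["last"]
    x = features["last"]                                                 # (1, C', H, W)

    # 3. Skip the query-key attention: apply only the value and output
    #    projections of the attention-pool layer, reshaped into 1x1 convolutions.
    attn = model.visual.attnpool
    x = torch.nn.functional.conv2d(x, attn.v_proj.weight[:, :, None, None], attn.v_proj.bias)
    x = torch.nn.functional.conv2d(x, attn.c_proj.weight[:, :, None, None], attn.c_proj.bias)

    # 4. Per-location cosine similarity against the text embeddings gives a
    #    segmentation map with no annotations or fine-tuning involved.
    x = x / x.norm(dim=1, keepdim=True)                                  # (1, C, H, W)
    logits = torch.einsum("kc,nchw->nkhw", text_feat.float(), x.float())
    seg = logits.argmax(dim=1)                                           # (1, H, W) class indices
```

With the default 224-pixel CLIP preprocessing the RN50 feature map is only 7x7, so in practice larger inputs are used to obtain denser predictions.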

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Unsupervised Semantic Segmentation with Language-image Pre-training | ADE20K | MaskCLIP | Mean IoU (val) | 9.8 | #7 |
| Open Vocabulary Panoptic Segmentation | ADE20K | MaskCLIP | PQ | 15.1 | #7 |
| Zero Shot Segmentation | ADE20K | MaskCLIP (training-free zero-shot segmentation) | mIoU | 10.2 | #4 |
| Semantic Segmentation | CC3M-TagMask | MaskCLIP | mIoU | 41.0 | #4 |
| Unsupervised Semantic Segmentation with Language-image Pre-training | Cityscapes val | MaskCLIP | mIoU | 10.0 | #8 |
| Unsupervised Semantic Segmentation with Language-image Pre-training | Cityscapes val | MaskCLIP | pixel accuracy | 35.9 | #3 |
| Unsupervised Semantic Segmentation with Language-image Pre-training | COCO-Object | MaskCLIP | mIoU | 20.6 | #7 |
| Unsupervised Semantic Segmentation with Language-image Pre-training | COCO-Stuff-171 | MaskCLIP | mIoU | 16.4 | #5 |
| Unsupervised Semantic Segmentation with Language-image Pre-training | COCO-Stuff-27 | DenseCLIP | mIoU | 19.6 | #4 |
| Unsupervised Semantic Segmentation with Language-image Pre-training | COCO-Stuff-27 | DenseCLIP | pixel accuracy | 32.2 | #3 |
| Unsupervised Semantic Segmentation with Language-image Pre-training | KITTI-STEP | DenseCLIP | mIoU | 15.3 | #3 |
| Unsupervised Semantic Segmentation with Language-image Pre-training | KITTI-STEP | DenseCLIP | pixel accuracy | 34.1 | #3 |
| Unsupervised Semantic Segmentation with Language-image Pre-training | PASCAL Context-59 | MaskCLIP | mIoU | 26.4 | #5 |
| Unsupervised Semantic Segmentation with Language-image Pre-training | PASCAL VOC | MaskCLIP | mIoU | 29.3 | #6 |
| Unsupervised Semantic Segmentation with Language-image Pre-training | PascalVOC-20 | MaskCLIP | mIoU | 74.9 | #4 |
