PiCIE: Unsupervised Semantic Segmentation using Invariance and Equivariance in Clustering

We present a new framework for semantic segmentation without annotations via clustering. Off-the-shelf clustering methods are limited to curated, single-label, and object-centric images, yet real-world data are dominantly uncurated, multi-label, and scene-centric. We extend clustering from images to pixels and assign separate cluster membership to different instances within each image. However, solely relying on pixel-wise feature similarity fails to learn high-level semantic concepts and overfits to low-level visual cues. We propose a method to incorporate geometric consistency as an inductive bias to learn invariance and equivariance for photometric and geometric variations. With our novel learning objective, our framework can learn high-level semantic concepts. Our method, PiCIE (Pixel-level feature Clustering using Invariance and Equivariance), is the first method capable of segmenting both things and stuff categories without any hyperparameter tuning or task-specific pre-processing. Our method largely outperforms existing baselines on COCO and Cityscapes with +17.5 Acc. and +4.5 mIoU. We show that PiCIE gives a better initialization for standard supervised training. The code is available at https://github.com/janghyuncho/PiCIE.
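The equivariance constraint described above can be illustrated with a minimal sketch (not the authors' implementation; all names here are hypothetical): for a geometric transformation such as a horizontal flip, the cluster assignment of the transformed image should equal the same transformation applied to the cluster assignment of the original image.

```python
# Illustrative sketch of the equivariance idea, assuming a toy 1-D "image"
# of scalar pixel features and nearest-centroid cluster assignment.
# This is NOT the PiCIE training code, only a demonstration of the constraint.

def assign(features, centroids):
    """Nearest-centroid cluster label for each pixel feature."""
    return [min(range(len(centroids)),
                key=lambda k: (f - centroids[k]) ** 2)
            for f in features]

def hflip(pixels):
    """A simple geometric transform: horizontal flip of a 1-D 'image'."""
    return pixels[::-1]

image = [0.1, 0.2, 0.9, 0.8, 0.15]  # toy pixel features
centroids = [0.1, 0.9]              # two toy cluster centroids

# Equivariance check: flip-then-assign should match assign-then-flip.
flip_then_assign = assign(hflip(image), centroids)
assign_then_flip = hflip(assign(image, centroids))
assert flip_then_assign == assign_then_flip
```

In PiCIE this constraint (together with invariance to photometric variations such as color jitter) is enforced as a learning objective on deep pixel features rather than checked on raw values as in this toy example.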

CVPR 2021


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Unsupervised Semantic Segmentation | COCO-All | PiCIE + H. | mIoU | 14.36 | # 2 | |
| Unsupervised Semantic Segmentation | COCO-All | PiCIE + H. | Pixel Accuracy | 49.99 | # 2 | |
| Unsupervised Semantic Segmentation | COCO-All | PiCIE | mIoU | 13.84 | # 3 | |
| Unsupervised Semantic Segmentation | COCO-All | PiCIE | Pixel Accuracy | 48.09 | # 3 | |
| Unsupervised Semantic Segmentation | COCO-Stuff | PiCIE | Pixel Accuracy | 31.48 | # 2 | |
