Self-Supervised Learning of Object Parts for Semantic Segmentation

CVPR 2022  ·  Adrian Ziegler, Yuki M. Asano

Progress in self-supervised learning has brought strong general image representation learning methods. Yet so far, it has mostly focused on image-level learning. In turn, tasks such as unsupervised image segmentation have not benefited from this trend, as they require spatially diverse representations. However, learning dense representations is challenging: in the unsupervised setting it is not clear how to guide the model toward representations that correspond to various potential object categories. In this paper, we argue that self-supervised learning of object parts is a solution to this issue. Object parts are generalizable: they are a priori independent of any particular object definition, but can be grouped to form objects a posteriori. To this end, we leverage the recently proposed Vision Transformer's capability of attending to objects and combine it with a spatially dense clustering task for fine-tuning the spatial tokens. Our method surpasses the state of the art on three semantic segmentation benchmarks by 3%-17%, showing that our representations are versatile under various object definitions. Finally, we extend this to fully unsupervised segmentation, which refrains completely from using label information even at test time, and demonstrate that a simple method for automatically merging discovered object parts based on community detection yields substantial gains.
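To make the dense clustering idea concrete, here is a minimal PyTorch sketch. It is not the authors' released implementation: the projection size, prototype count, and the softmax-sharpened targets (standing in for the optimal-transport assignments typically used in SwAV-style objectives) are illustrative assumptions. Each spatial token is projected and scored against learnable prototypes, and the tokens of one augmented view are trained to predict the cluster assignments of the other view.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseClusteringHead(nn.Module):
    """Projects ViT patch tokens and scores them against learnable prototypes."""
    def __init__(self, token_dim=384, proj_dim=256, num_prototypes=300):
        super().__init__()
        self.proj = nn.Linear(token_dim, proj_dim)
        self.prototypes = nn.Linear(proj_dim, num_prototypes, bias=False)

    def forward(self, tokens):                      # tokens: (B, N, D) spatial tokens
        z = F.normalize(self.proj(tokens), dim=-1)  # unit-norm token embeddings
        return self.prototypes(z)                   # (B, N, K) prototype scores

def swapped_prediction_loss(scores_a, scores_b, temp=0.1):
    """Each view's tokens predict the (sharpened) cluster assignments of the other."""
    with torch.no_grad():                           # targets are detached and sharpened
        q_a = F.softmax(scores_a / 0.05, dim=-1)
        q_b = F.softmax(scores_b / 0.05, dim=-1)
    p_a = F.log_softmax(scores_a / temp, dim=-1)
    p_b = F.log_softmax(scores_b / temp, dim=-1)
    return -0.5 * ((q_b * p_a).sum(-1).mean() + (q_a * p_b).sum(-1).mean())

# Usage with random stand-ins for the ViT outputs of two augmented views:
head = DenseClusteringHead()
tokens_a = torch.randn(4, 196, 384)                 # would be vit(view_a) in practice
tokens_b = torch.randn(4, 196, 384)                 # would be vit(view_b) in practice
loss = swapped_prediction_loss(head(tokens_a), head(tokens_b))
loss.backward()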


Results from the Paper


Task: Unsupervised Semantic Segmentation
Dataset: PASCAL VOC 2012 val

Model                Metric                      Value   Global Rank
Leopart (ViT-B/8)    Clustering [mIoU]           47.2    #5
Leopart (ViT-B/8)    FCN [mIoU]                  76.3    #1
Leopart (ViT-S/16)   Linear Classifier [mIoU]    69.3    #1
Leopart (ViT-S/16)   Clustering [mIoU]           41.7    #8
Leopart (ViT-S/16)   FCN [mIoU]                  71.4    #2
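
For reference, the Clustering [mIoU] protocol scores an unsupervised segmenter by matching predicted clusters to ground-truth classes, typically with the Hungarian algorithm, before computing mean IoU. Below is a minimal sketch of that evaluation, not the benchmark's exact code; it assumes flat per-pixel arrays pred and gt and as many clusters as classes.

import numpy as np
from scipy.optimize import linear_sum_assignment

def hungarian_miou(pred, gt, num_classes):
    # Confusion matrix between predicted cluster ids and true class ids.
    conf = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(conf, (pred, gt), 1)
    # One-to-one matching of clusters to classes that maximizes total overlap.
    rows, cols = linear_sum_assignment(conf, maximize=True)
    remap = dict(zip(rows, cols))
    pred_mapped = np.array([remap[p] for p in pred])
    # Mean IoU over classes that actually appear.
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred_mapped == c) & (gt == c))
        union = np.sum((pred_mapped == c) | (gt == c))
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

# Toy check: clusters are a relabeling of the ground truth, so mIoU is 1.0.
print(hungarian_miou(np.array([0, 0, 1, 1]), np.array([1, 1, 0, 0]), num_classes=2))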

Methods


No methods listed for this paper.