InfoSeg: Unsupervised Semantic Image Segmentation with Mutual Information Maximization

7 Oct 2021 · Robert Harb, Patrick Knöbelreiter

We propose a novel method for unsupervised semantic image segmentation based on mutual information maximization between local and global high-level image features. The core idea of our work is to leverage recent progress in self-supervised image representation learning. Representation learning methods compute a single high-level feature capturing an entire image. In contrast, we compute multiple high-level features, each capturing image segments of one particular semantic class. To this end, we propose a novel two-step learning procedure comprising a segmentation step and a mutual information maximization step. In the first step, we segment images based on local and global features. In the second step, we maximize the mutual information between local features and the high-level features of their respective class. For training, we use only unlabeled images and start from random network initialization. For quantitative and qualitative evaluation, we use established benchmarks as well as COCO-Persons, a challenging novel benchmark we introduce in this paper. InfoSeg significantly outperforms the current state of the art, e.g., we achieve a relative increase of 26% in the Pixel Accuracy metric on the COCO-Stuff dataset.
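The two-step procedure described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the authors' architecture: the feature shapes, the soft-assignment segmentation via a softmax over dot products, and the dot-product surrogate for the mutual-information objective are all illustrative choices standing in for the paper's learned networks and InfoMax-style loss.

```python
import numpy as np

# Hypothetical shapes (not from the paper): an 8x8 feature map with
# 16-dim features and K = 3 semantic classes.
rng = np.random.default_rng(0)
H, W, C, K = 8, 8, 16, 3

# Local features: one vector per spatial location; global features:
# one vector per semantic class (in InfoSeg both come from a CNN,
# here they are random placeholders).
local_feats = rng.standard_normal((H, W, C))
global_feats = rng.standard_normal((K, C))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Step 1 (segmentation): score each pixel against every class feature
# and soft-assign it to classes via a softmax over the scores.
scores = local_feats @ global_feats.T        # (H, W, K)
assignment = softmax(scores)                 # soft per-pixel class masks

# Step 2 (MI maximization, stand-in): reward agreement between each
# local feature and the global feature of its assigned class. In the
# actual method this is a mutual-information lower bound that is
# maximized by gradient descent; here we only evaluate the score.
objective = (assignment * scores).sum(axis=-1).mean()

# Hard segmentation for inspection.
segmentation = scores.argmax(axis=-1)        # (H, W) integer labels
print(segmentation.shape, objective)
```

Training would alternate (or interleave) these two steps end-to-end, updating the networks that produce `local_feats` and `global_feats` so that pixels of the same semantic class align with a shared global feature.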

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Unsupervised Semantic Segmentation | COCO-Persons | InfoSeg | Pixel Accuracy | 69.6 | #1 |
| Unsupervised Semantic Segmentation | COCO-Stuff-15 | InfoSeg | Pixel Accuracy | 38.8 | #1 |
| Unsupervised Semantic Segmentation | COCO-Stuff-3 | InfoSeg | Pixel Accuracy | 73.8 | #3 |
| Unsupervised Semantic Segmentation | Potsdam-3 | InfoSeg | Pixel Accuracy | 71.6 | #1 |
