Self-Supervised Learning of Pretext-Invariant Representations

CVPR 2020 · Ishan Misra, Laurens van der Maaten

The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images. Many pretext tasks lead to representations that are covariant with image transformations. We argue that, instead, semantic representations ought to be invariant under such transformations. Specifically, we develop Pretext-Invariant Representation Learning (PIRL, pronounced as "pearl"), which learns invariant representations based on pretext tasks. We use PIRL with a commonly used pretext task that involves solving jigsaw puzzles. We find that PIRL substantially improves the semantic quality of the learned image representations. Our approach sets a new state of the art on several popular benchmarks for self-supervised learning from images. Despite being unsupervised, PIRL outperforms supervised pre-training in learning image representations for object detection. Altogether, our results demonstrate the potential of self-supervised learning of image representations with good invariance properties.

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Self-Supervised Image Classification | ImageNet | PIRL | Top 1 Accuracy | 63.6% | #99 |
| Self-Supervised Image Classification | ImageNet | PIRL | Number of Params | 24M | #40 |
| Semi-Supervised Image Classification | ImageNet - 10% labeled data | PIRL (ResNet-50) | Top 5 Accuracy | 83.8% | #32 |
| Contrastive Learning | imagenet-1k | ResNet50 | ImageNet Top-1 Accuracy | 63.6% | #7 |