About

This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation, together with a loss function to train on. One example of a loss function is an autoencoder-based loss, where the goal is to reconstruct an image pixel by pixel. A more recent and popular example is a contrastive loss, which measures the similarity of sample pairs in a representation space; here the target can vary from pair to pair, rather than being a fixed target to reconstruct (as in the case of autoencoders).
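
For illustration, here is a minimal sketch of an InfoNCE-style contrastive loss in PyTorch, assuming `z1` and `z2` are embeddings of two augmented views of the same batch (this simplified version scores only cross-view pairs, unlike the full NT-Xent loss used in SimCLR):

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two views; matching rows are positive pairs."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature               # (N, N) cosine-similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)          # diagonal entries are the positives
```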

A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods. The leaderboards for the linear evaluation protocol can be found below. In practice, it is more common to fine-tune features on a downstream task. An alternative evaluation protocol therefore uses semi-supervised learning and fine-tunes on a percentage of the labels. The leaderboards for the fine-tuning protocol can be accessed here.
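
A minimal sketch of the linear evaluation protocol, assuming the caller supplies a pretrained `backbone` that maps images to feature vectors and a labelled `train_loader` (all names and the default dimensions below are illustrative):

```python
import torch
import torch.nn as nn

def linear_eval_epoch(backbone, train_loader, feature_dim=2048, num_classes=1000):
    """Train a linear classifier for one epoch on top of frozen features."""
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad = False                      # the representation stays frozen

    classifier = nn.Linear(feature_dim, num_classes) # the only trainable module
    optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    for images, labels in train_loader:
        with torch.no_grad():
            features = backbone(images)              # no gradients into the backbone
        loss = criterion(classifier(features), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return classifier
```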

You may want to read some blog posts before reading the papers and checking the leaderboards.

There is also Yann LeCun's talk at AAAI-20, which you can watch here (from 35:00 onwards).

(Image credit: A Simple Framework for Contrastive Learning of Visual Representations)

Benchmarks


Libraries

Greatest papers with code

On Mutual Information Maximization for Representation Learning

ICLR 2020 google-research/google-research

Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data.

REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION

Bootstrap your own latent: A new approach to self-supervised Learning

13 Jun 2020 deepmind/deepmind-research

From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
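
A minimal sketch of the BYOL regression objective, assuming `online_pred` is the online network's prediction for one view and `target_proj` is the target network's projection of the other (in the full method the loss is symmetrized over both views, and the target network is an exponential moving average of the online one):

```python
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    """Mean squared error between l2-normalized vectors, equal to
    2 - 2 * cosine similarity; no gradient flows into the target network."""
    p = F.normalize(online_pred, dim=1)
    z = F.normalize(target_proj.detach(), dim=1)
    return 2 - 2 * (p * z).sum(dim=1).mean()
```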

REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING SEMI-SUPERVISED IMAGE CLASSIFICATION

Emerging Properties in Self-Supervised Vision Transformers

29 Apr 2021 lucidrains/vit-pytorch

In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets).

COPY DETECTION SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING SEMANTIC SEGMENTATION VIDEO OBJECT DETECTION

Colorful Image Colorization

28 Mar 2016 richzhang/colorization

We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result.
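
A minimal sketch of this classification formulation, assuming the ab color channels have been quantized into 313 bins (as in the paper); the rebalancing weights below are placeholders, whereas the real ones upweight rare, saturated colors:

```python
import torch
import torch.nn as nn

n_bins = 313                                     # quantized ab color bins
class_weights = torch.ones(n_bins)               # placeholder; real weights favor rare colors
criterion = nn.CrossEntropyLoss(weight=class_weights)

def colorization_loss(pred_logits, target_bins):
    """pred_logits: (N, n_bins, H, W) per-pixel class scores;
    target_bins: (N, H, W) quantized ab labels per pixel."""
    return criterion(pred_logits, target_bins)
```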

COLORIZATION SELF-SUPERVISED IMAGE CLASSIFICATION

Improved Baselines with Momentum Contrastive Learning

9 Mar 2020 facebookresearch/moco

Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.

DATA AUGMENTATION REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION

Big Self-Supervised Models are Strong Semi-Supervised Learners

NeurIPS 2020 google-research/simclr

The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge.
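
A minimal sketch of the third step (distillation on unlabeled images), assuming `teacher_logits` come from the fine-tuned big model and `student_logits` from a smaller network; `tau` is a softening temperature and all names are illustrative:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, tau=1.0):
    """KL divergence between temperature-softened teacher and student
    output distributions, computed on unlabeled images."""
    teacher_probs = F.softmax(teacher_logits / tau, dim=1)
    student_log_probs = F.log_softmax(student_logits / tau, dim=1)
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * tau ** 2
```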

SELF-SUPERVISED IMAGE CLASSIFICATION SEMI-SUPERVISED IMAGE CLASSIFICATION

Barlow Twins: Self-Supervised Learning via Redundancy Reduction

4 Mar 2021 facebookresearch/vissl

Barlow Twins drives the cross-correlation matrix between the embeddings of two distorted versions of a sample as close to the identity matrix as possible. This causes the representation vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors.
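
A minimal sketch of the Barlow Twins objective, assuming `z1` and `z2` are (N, D) embeddings of two distorted views of the same batch; `lambd` trades off the invariance (diagonal) and redundancy-reduction (off-diagonal) terms:

```python
import torch

def barlow_twins_loss(z1, z2, lambd=5e-3):
    """Push the cross-correlation matrix of the two views toward identity:
    diagonal -> 1 (invariance), off-diagonal -> 0 (redundancy reduction)."""
    n, _ = z1.shape
    z1 = (z1 - z1.mean(0)) / z1.std(0)           # normalize along the batch dimension
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = (z1.t() @ z2) / n                        # (D, D) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambd * off_diag
```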

CLASSIFICATION OBJECT DETECTION SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING SEMI-SUPERVISED IMAGE CLASSIFICATION

Self-supervised Pretraining of Visual Features in the Wild

2 Mar 2021 facebookresearch/vissl

Recently, self-supervised learning methods like MoCo, SimCLR, BYOL and SwAV have reduced the gap with supervised methods.

 Ranked #1 on Self-Supervised Image Classification on ImageNet (finetuned) (using extra training data)

SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING SEMI-SUPERVISED IMAGE CLASSIFICATION