
Self-Supervised Image Classification

10 papers with code · Computer Vision

This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation, together with a loss function to learn with. One example of a loss function is an autoencoder-based loss, where the goal is to reconstruct an image pixel by pixel. A more popular recent example is a contrastive loss, which measures the similarity of sample pairs in a representation space, and where the target can vary rather than being a fixed target to reconstruct (as in the case of autoencoders).
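As a rough illustration of the contrastive idea, here is a minimal InfoNCE-style loss in PyTorch-flavoured Python; the function name, temperature value, and tensor shapes are illustrative assumptions, not taken from any specific paper or repository listed below.

    import torch
    import torch.nn.functional as F

    def info_nce_loss(z_a, z_b, temperature=0.1):
        # z_a, z_b: [batch, dim] embeddings of two augmented views of the same images.
        # Matching rows are positives; every other row in the batch serves as a negative.
        z_a = F.normalize(z_a, dim=1)
        z_b = F.normalize(z_b, dim=1)
        logits = z_a @ z_b.t() / temperature               # pairwise cosine similarities
        targets = torch.arange(z_a.size(0), device=z_a.device)
        return F.cross_entropy(logits, targets)            # positive pairs sit on the diagonal

In contrast to the autoencoder loss, nothing is reconstructed: the target for each sample is simply its paired view, which changes with the sampled augmentations.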

A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods. The leaderboards for the linear evaluation protocol can be found below. In practice, it is more common to fine-tune the features on a downstream task. An alternative evaluation protocol therefore uses semi-supervised learning and fine-tunes on a percentage of the labels. The leaderboards for the fine-tuning protocol can be accessed here.
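As a sketch of the linear evaluation protocol: a pretrained backbone (encoder), its feature dimension (feat_dim), the number of downstream classes (num_classes), the labelled loader (train_loader), and the optimiser hyperparameters are all illustrative assumptions here, not a prescribed recipe.

    import torch
    import torch.nn as nn

    def linear_eval(encoder, feat_dim, num_classes, train_loader, epochs=1):
        # Freeze the self-supervised backbone; only the linear head is trained.
        encoder.eval()
        for p in encoder.parameters():
            p.requires_grad = False
        classifier = nn.Linear(feat_dim, num_classes)
        optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1, momentum=0.9)
        criterion = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for images, labels in train_loader:
                with torch.no_grad():                      # features stay frozen
                    features = encoder(images)
                loss = criterion(classifier(features), labels)
                optimizer.zero_grad()
                loss.backward()
                optimizer.step()
        return classifier

The semi-supervised alternative differs in that the backbone's parameters are also updated, using only a fraction of the labels.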

You may want to read some blog posts before reading the papers and checking the leaderboards.

There is also Yann LeCun's talk at AAAI-20, which you can watch here (from 35:00 onwards).

(Image credit: A Simple Framework for Contrastive Learning of Visual Representations)

Leaderboards

Latest papers with code

Improved Baselines with Momentum Contrastive Learning

9 Mar 2020 · facebookresearch/moco

Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.

DATA AUGMENTATION · REPRESENTATION LEARNING · SELF-SUPERVISED IMAGE CLASSIFICATION


Self-Supervised Learning of Pretext-Invariant Representations

4 Dec 2019 · akwasigroch/Pretext-Invariant-Representations

The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images.

OBJECT DETECTION · REPRESENTATION LEARNING · SELF-SUPERVISED IMAGE CLASSIFICATION · SEMI-SUPERVISED IMAGE CLASSIFICATION


Momentum Contrast for Unsupervised Visual Representation Learning

13 Nov 2019 · facebookresearch/moco

This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning.

REPRESENTATION LEARNING · SELF-SUPERVISED IMAGE CLASSIFICATION

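The "large and consistent dictionary" mentioned in the abstract above is built from two ingredients: a key encoder updated as a momentum (exponential moving average) copy of the query encoder, and a FIFO queue of encoded keys from recent mini-batches. Below is a minimal sketch of both, with illustrative function names and coefficients rather than the actual code from facebookresearch/moco.

    import torch

    @torch.no_grad()
    def momentum_update(encoder_q, encoder_k, m=0.999):
        # The key encoder slowly tracks the query encoder, keeping keys consistent.
        for q_param, k_param in zip(encoder_q.parameters(), encoder_k.parameters()):
            k_param.data = m * k_param.data + (1.0 - m) * q_param.data

    @torch.no_grad()
    def enqueue_dequeue(queue, new_keys):
        # queue: [K, dim] dictionary of negatives; new_keys: [batch, dim].
        # Newest keys are prepended and the oldest ones dropped, so the
        # dictionary size K can be much larger than the mini-batch.
        return torch.cat([new_keys, queue], dim=0)[: queue.size(0)]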

On Mutual Information Maximization for Representation Learning

ICLR 2020 · google-research/google-research

Many recent methods for unsupervised or self-supervised representation learning train feature extractors by maximizing an estimate of the mutual information (MI) between different views of the data.

REPRESENTATION LEARNING · SELF-SUPERVISED IMAGE CLASSIFICATION


Large Scale Adversarial Representation Learning

NeurIPS 2019 · LEGO999/BigBiGAN-TensorFlow2.0

We extensively evaluate the representation learning and generation capabilities of these BigBiGAN models, demonstrating that these generation-based models achieve the state of the art in unsupervised representation learning on ImageNet, as well as in unconditional image generation.

SOTA for Image Generation on ImageNet 64x64 (Inception Score metric)

IMAGE GENERATION · SELF-SUPERVISED IMAGE CLASSIFICATION · SEMI-SUPERVISED IMAGE CLASSIFICATION · UNSUPERVISED REPRESENTATION LEARNING


Contrastive Multiview Coding

13 Jun 2019 · HobbitLong/CMC

We analyze key properties of the approach that make it work, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views we learn from, the better the resulting representation captures underlying scene semantics.

OBJECT CLASSIFICATION · SELF-SUPERVISED IMAGE CLASSIFICATION


Learning Representations by Maximizing Mutual Information Across Views

NeurIPS 2019 · Philip-Bachman/amdim-public

Following our proposed approach, we develop a model which learns image representations that significantly outperform prior methods on the tasks we consider.

DATA AUGMENTATION · REPRESENTATION LEARNING · SELF-SUPERVISED IMAGE CLASSIFICATION


Representation Learning with Contrastive Predictive Coding

10 Jul 2018 · davidtellez/contrastive-predictive-coding

The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models.

REPRESENTATION LEARNING · SELF-SUPERVISED IMAGE CLASSIFICATION · SEMI-SUPERVISED IMAGE CLASSIFICATION
