Self-Supervised Image Classification

84 papers with code • 2 benchmarks • 1 dataset

This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation, along with a loss function to train with. One example of a loss function is an autoencoder-based loss, where the goal is to reconstruct an image pixel by pixel. A more popular recent example is a contrastive loss, which measures the similarity of sample pairs in a representation space, and where the target can vary from sample to sample instead of being a fixed reconstruction target (as in the case of autoencoders).
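As a rough illustration of the contrastive case, below is a minimal PyTorch-style sketch of an NT-Xent (normalized temperature-scaled cross-entropy) loss of the kind used by SimCLR. The function name, tensor shapes, and temperature value are illustrative assumptions, not taken from any particular repository.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views of the same batch.

    z1, z2: [N, D] embeddings of the two views. Each sample's positive is its
    other view; the remaining 2N - 2 embeddings in the batch act as negatives.
    """
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # [2N, D], unit-norm rows
    sim = z @ z.t() / temperature                         # [2N, 2N] scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))                     # a sample is never its own pair
    n = z1.size(0)
    # Row i (first view) is paired with row i + n (second view), and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```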

A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods. The leaderboards for the linear evaluation protocol can be found below. In practice, it is more common to fine-tune features on a downstream task. An alternative evaluation protocol therefore uses semi-supervised learning and fine-tunes on a fraction of the labels. The leaderboards for the fine-tuning protocol can be accessed here.
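As a sketch of the linear evaluation protocol, the snippet below freezes a backbone and trains only a linear classifier on its pooled features. The use of a torchvision ResNet-50, the feature dimension, and the optimizer settings are placeholder assumptions, not part of any benchmark's official recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

# Stand-in backbone; in practice you would load your own self-supervised checkpoint.
backbone = torchvision.models.resnet50(weights=None)
backbone.fc = nn.Identity()           # expose the 2048-d pooled features
for p in backbone.parameters():
    p.requires_grad = False           # linear protocol: the encoder stays frozen
backbone.eval()

linear_probe = nn.Linear(2048, 1000)  # one logit per class (e.g. ImageNet-1k)
optimizer = torch.optim.SGD(linear_probe.parameters(), lr=0.1, momentum=0.9)

def probe_step(images, labels):
    with torch.no_grad():             # features are computed without gradients
        feats = backbone(images)
    logits = linear_probe(feats)
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```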

You may want to read some blog posts before reading the papers and checking the leaderboards.

There is also Yann LeCun's talk at AAAI-20, which you can watch here (35:00+).

(Image credit: A Simple Framework for Contrastive Learning of Visual Representations)

Libraries

Use these libraries to find Self-Supervised Image Classification models and implementations
See all 18 libraries.

Most implemented papers

A Simple Framework for Contrastive Learning of Visual Representations

google-research/simclr ICML 2020

This paper presents SimCLR: a simple framework for contrastive learning of visual representations.

Masked Autoencoders Are Scalable Vision Learners

facebookresearch/mae CVPR 2022

Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
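A minimal sketch of random patch masking in this spirit (not the official facebookresearch/mae code): a fixed fraction of patch tokens is dropped per image, and only the visible ones would be passed to the encoder. Names and the mask ratio are illustrative.

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """Randomly drop a fraction of patch tokens for masked-autoencoder-style pretraining.

    patches: [B, L, D] sequence of patch embeddings.
    Returns the kept (visible) patches and the per-sample shuffle indices
    needed to restore the original patch order later.
    """
    B, L, D = patches.shape
    len_keep = int(L * (1 - mask_ratio))
    noise = torch.rand(B, L, device=patches.device)   # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)         # random permutation per sample
    ids_keep = ids_shuffle[:, :len_keep]              # the first len_keep patches stay visible
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).repeat(1, 1, D))
    return visible, ids_shuffle
```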

Momentum Contrast for Unsupervised Visual Representation Learning

facebookresearch/moco CVPR 2020

This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning.
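The dictionary keys come from a momentum (exponential-moving-average) encoder. The sketch below shows only that update rule, with the queue and the contrastive loss omitted; function names and the momentum value are illustrative assumptions.

```python
import copy
import torch

def make_key_encoder(query_encoder):
    """The key encoder starts as a copy of the query encoder and is never
    updated by backpropagation, only by the momentum rule below."""
    key_encoder = copy.deepcopy(query_encoder)
    for p in key_encoder.parameters():
        p.requires_grad = False
    return key_encoder

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    """Move the key encoder toward the query encoder with an exponential moving average."""
    for q, k in zip(query_encoder.parameters(), key_encoder.parameters()):
        k.data.mul_(m).add_(q.data, alpha=1 - m)
```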

Colorful Image Colorization

richzhang/colorization 28 Mar 2016

We embrace the underlying uncertainty of the problem by posing it as a classification task and use class-rebalancing at training time to increase the diversity of colors in the result.

Improved Baselines with Momentum Contrastive Learning

facebookresearch/moco 9 Mar 2020

Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.

Bootstrap your own latent: A new approach to self-supervised Learning

deepmind/deepmind-research 13 Jun 2020

From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
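A minimal sketch of a BYOL-style regression objective, assuming the online network's prediction and the target network's projection have already been computed; the stop-gradient on the target branch is the essential detail, and the function name is illustrative.

```python
import torch
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    """Regression loss between the online prediction and the (stop-gradient)
    target projection of another view of the same image.

    Both inputs are [N, D]; the loss is the mean squared error between
    L2-normalized vectors, equivalent to 2 - 2 * cosine similarity.
    """
    p = F.normalize(online_pred, dim=1)
    z = F.normalize(target_proj.detach(), dim=1)   # no gradient flows into the target network
    return (2 - 2 * (p * z).sum(dim=1)).mean()
```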

Representation Learning with Contrastive Predictive Coding

davidtellez/contrastive-predictive-coding 10 Jul 2018

The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models.

Exploring Simple Siamese Representation Learning

facebookresearch/simsiam CVPR 2021

Our experiments show that collapsing solutions do exist for the loss and structure, but a stop-gradient operation plays an essential role in preventing collapsing.
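A minimal sketch of a SimSiam-style symmetric loss with the stop-gradient made explicit via detach(); predictor outputs (p1, p2) and projector outputs (z1, z2) are assumed to be computed elsewhere, and the names are illustrative.

```python
import torch.nn.functional as F

def simsiam_loss(p1, p2, z1, z2):
    """Symmetric negative cosine similarity with stop-gradient on the projections.

    Detaching z1/z2 is the stop-gradient that the paper identifies as the key
    ingredient preventing representational collapse.
    """
    def d(p, z):
        return -F.cosine_similarity(p, z.detach(), dim=1).mean()
    return 0.5 * d(p1, z2) + 0.5 * d(p2, z1)
```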

Emerging Properties in Self-Supervised Vision Transformers

facebookresearch/dino ICCV 2021

In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets).

Barlow Twins: Self-Supervised Learning via Redundancy Reduction

facebookresearch/barlowtwins 4 Mar 2021

This causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors.
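A minimal sketch of the Barlow Twins objective, assuming two batches of embeddings from distorted views of the same images; the off-diagonal weight roughly follows the scale reported in the paper, but the exact value and names here are illustrative.

```python
import torch

def barlow_twins_loss(z1, z2, lambda_offdiag=5e-3):
    """Push the cross-correlation matrix of the two views' standardized
    embeddings toward the identity.

    The diagonal term makes corresponding features invariant to the distortion;
    the off-diagonal term reduces redundancy between different features.
    """
    N, D = z1.shape
    z1 = (z1 - z1.mean(0)) / z1.std(0)   # standardize each embedding dimension over the batch
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = (z1.t() @ z2) / N                # [D, D] cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag_embed(torch.diagonal(c))).pow(2).sum()
    return on_diag + lambda_offdiag * off_diag
```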