Self-Supervised Image Classification

43 papers with code • 2 benchmarks • 1 dataset

This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation, together with a loss function to optimize. One example is an autoencoder-based loss, where the goal is to reconstruct the image pixel by pixel. A more recent and popular example is a contrastive loss, which measures the similarity of sample pairs in a representation space; unlike the fixed reconstruction target of an autoencoder, the target here can vary from pair to pair. A minimal sketch of such a loss is given below.
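
As a concrete illustration, here is a minimal sketch of a contrastive (NT-Xent / InfoNCE-style) loss in PyTorch, in the spirit of SimCLR; the function name and defaults are illustrative, not taken from any particular codebase.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss. z1, z2: (N, D) embeddings of two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)        # (2N, D) stacked views
    sim = z @ z.t() / temperature         # pairwise cosine similarities
    sim.fill_diagonal_(float("-inf"))     # a sample is not its own pair
    n = z1.size(0)
    # The positive for row i is the other view of the same image: i+N (or i-N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```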

A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods; a sketch of this protocol follows below. The leaderboards for the linear evaluation protocol are listed further down. In practice, it is more common to fine-tune features on a downstream task, so an alternative evaluation protocol uses semi-supervised learning and fine-tunes on a percentage of the labels (typically 1% or 10%); leaderboards for the fine-tuning protocol are listed separately.
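
A minimal sketch of the linear protocol in PyTorch; the `backbone`, `classifier`, and loader are placeholders for whatever encoder and dataset are being evaluated.

```python
import torch
import torch.nn.functional as F

def linear_eval_epoch(backbone, classifier, loader, optimizer, device="cpu"):
    """One epoch of linear evaluation: the backbone stays frozen,
    only the linear classifier on top of it is trained."""
    backbone.eval()
    for p in backbone.parameters():
        p.requires_grad = False                  # freeze the encoder
    classifier.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        with torch.no_grad():
            feats = backbone(images)             # frozen representations
        loss = F.cross_entropy(classifier(feats), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```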

For background before diving into the papers and leaderboards, Yann LeCun's AAAI-20 talk covers self-supervised learning from around the 35:00 mark.

(Image credit: A Simple Framework for Contrastive Learning of Visual Representations)

Latest papers with code

Masked Autoencoders Are Scalable Vision Learners

lucidrains/vit-pytorch • 6,944 ★ • 11 Nov 2021

Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. (A minimal sketch of this masking follows this entry.)

Object Detection Self-Supervised Image Classification +2

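A minimal sketch of MAE-style random patch masking; this mirrors the idea described in the abstract, not the authors' exact implementation.

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """patches: (B, L, D) sequence of patch embeddings."""
    B, L, D = patches.shape
    len_keep = int(L * (1 - mask_ratio))
    noise = torch.rand(B, L, device=patches.device)  # one random score per patch
    ids_shuffle = noise.argsort(dim=1)               # random permutation of patches
    ids_keep = ids_shuffle[:, :len_keep]             # indices of visible patches
    visible = torch.gather(
        patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_keep    # the encoder only ever sees `visible`
```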

Self-Supervised Learning by Estimating Twin Class Distributions

bytedance/TWIST • 58 ★ • 14 Oct 2021

Different from clustering-based methods, which alternate between clustering and learning, our method is a single learning process guided by a unified loss function.

Fine-Grained Image Classification Representation Learning +5

Weakly Supervised Contrastive Learning

KyleZheng1997/WCL • 4 ★ • ICCV 2021 (10 Oct 2021)

Specifically, our proposed framework is based on two projection heads, one of which will perform the regular instance discrimination task. (A rough sketch of this two-head layout follows this entry.)

Contrastive Learning Representation Learning +2

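A rough, hypothetical sketch of such a two-projection-head layout; the module and parameter names are illustrative, not taken from the WCL repository.

```python
import torch.nn as nn

class TwoHeadModel(nn.Module):
    """A backbone with two projection heads: one for the regular instance
    discrimination task, one for the second (weakly supervised) branch."""
    def __init__(self, backbone, feat_dim=2048, proj_dim=128):
        super().__init__()
        self.backbone = backbone
        def make_head():
            return nn.Sequential(
                nn.Linear(feat_dim, feat_dim), nn.ReLU(inplace=True),
                nn.Linear(feat_dim, proj_dim))
        self.head_instance = make_head()   # instance discrimination head
        self.head_weak = make_head()       # weakly supervised contrastive head

    def forward(self, x):
        h = self.backbone(x)               # (B, feat_dim) features
        return self.head_instance(h), self.head_weak(h)
```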

ReSSL: Relational Self-Supervised Learning with Weak Augmentation

vturrisi/solo-learn • 533 ★ • NeurIPS 2021 (20 Jul 2021)

Self-supervised learning (SSL), including the mainstream contrastive learning methods, has achieved great success in learning visual representations without data annotations.

Contrastive Learning Self-Supervised Image Classification +1

XCiT: Cross-Covariance Image Transformers

rwightman/pytorch-image-models • 14,990 ★ • NeurIPS 2021 (17 Jun 2021)

We propose a "transposed" version of self-attention that operates across feature channels rather than tokens, where the interactions are based on the cross-covariance matrix between keys and queries. (A sketch of this attention follows this entry.)

Instance Segmentation Object Detection +2

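A minimal sketch of this cross-covariance attention, reconstructed from the description above; the paper's learnable per-head temperature is simplified to a scalar here.

```python
import torch
import torch.nn.functional as F

def xca(q, k, v, temperature=1.0):
    """Cross-covariance attention. q, k, v: (B, heads, N, d),
    with N tokens and d channels per head."""
    # L2-normalize along the token axis, so k^T q is a cosine cross-covariance.
    q = F.normalize(q, dim=-2)
    k = F.normalize(k, dim=-2)
    attn = (k.transpose(-2, -1) @ q) / temperature  # (B, heads, d, d) channel map
    attn = attn.softmax(dim=-1)
    return v @ attn                                 # (B, heads, N, d)
```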

Efficient Self-supervised Vision Transformers for Representation Learning

microsoft/esvit • 262 ★ • 17 Jun 2021

This paper investigates two techniques for developing efficient self-supervised vision transformers (EsViT) for visual representation learning.

Representation Learning Self-Supervised Image Classification

ResMLP: Feedforward networks for image classification with data-efficient training

rwightman/pytorch-image-models • 14,990 ★ • NeurIPS 2021 (07 May 2021)

We present ResMLP, an architecture built entirely upon multi-layer perceptrons for image classification. (A sketch of one residual block follows this entry.)

Ranked #6 on Image Classification on ImageNet V2 (using extra training data)

Classification Data Augmentation +5

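A rough sketch of one ResMLP residual block based on the paper's description: an affine transform in place of LayerNorm, a linear layer mixing patches, then a per-patch MLP mixing channels. The real model also applies LayerScale on each residual branch, omitted here for brevity.

```python
import torch
import torch.nn as nn

class Affine(nn.Module):
    """ResMLP's affine transform, used in place of LayerNorm."""
    def __init__(self, dim):
        super().__init__()
        self.alpha = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x):                  # x: (B, num_patches, dim)
        return self.alpha * x + self.beta

class ResMLPBlock(nn.Module):
    def __init__(self, dim, num_patches, expansion=4):
        super().__init__()
        self.norm1, self.norm2 = Affine(dim), Affine(dim)
        self.patch_mix = nn.Linear(num_patches, num_patches)  # cross-patch linear
        self.channel_mlp = nn.Sequential(                     # per-patch MLP
            nn.Linear(dim, expansion * dim), nn.GELU(),
            nn.Linear(expansion * dim, dim))

    def forward(self, x):                  # x: (B, num_patches, dim)
        y = self.patch_mix(self.norm1(x).transpose(1, 2)).transpose(1, 2)
        x = x + y                          # residual cross-patch sublayer
        return x + self.channel_mlp(self.norm2(x))
```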

Emerging Properties in Self-Supervised Vision Transformers

lucidrains/vit-pytorch • 6,944 ★ • ICCV 2021 (29 Apr 2021)

In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets).

Copy Detection Self-Supervised Image Classification +3
