Self-Supervised Image Classification

43 papers with code • 2 benchmarks • 1 dataset

This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation, and a loss function to learn with. One example of a loss function is an autoencoder-based loss, where the goal is to reconstruct an image pixel-by-pixel. A more popular recent example is a contrastive loss, which measures the similarity of sample pairs in a representation space; unlike autoencoders, the target can vary from pair to pair rather than being a fixed image to reconstruct.
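To make the contrastive idea concrete, here is a minimal numpy sketch of an NT-Xent-style loss (the normalized temperature-scaled cross-entropy used by SimCLR). The function name, batch shapes, and temperature value are illustrative choices, not part of any specific paper's implementation:

```python
import numpy as np

def nt_xent_loss(z_a, z_b, temperature=0.5):
    """Contrastive (NT-Xent-style) loss over two augmented views.

    z_a, z_b: (N, D) arrays of embeddings; row i of z_a and row i of
    z_b are the two views of the same image (the positive pair).
    """
    n = z_a.shape[0]
    z = np.concatenate([z_a, z_b], axis=0)              # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize
    sim = z @ z.T / temperature                         # cosine similarities
    np.fill_diagonal(sim, -np.inf)                      # mask self-pairs
    # The positive for sample i is its other augmented view.
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

# Aligned views (identical embeddings) should score much better than
# views of unrelated images.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = nt_xent_loss(z, z)
mismatched = nt_xent_loss(z, rng.normal(size=(8, 16)))
```

The loss rewards pulling the two views of an image together while pushing all other samples in the batch apart, which is why the "target" varies with the batch rather than being fixed.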

A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods. The leaderboards for the linear evaluation protocol can be found below. In practice, it is more common to fine-tune features on a downstream task. An alternative evaluation protocol therefore uses semi-supervised learning and fine-tunes on a percentage of the labels. The leaderboards for the fine-tuning protocol can be accessed here.
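The linear evaluation protocol can be sketched as follows: the encoder is frozen and only a linear (softmax) classifier is trained on its output features. This is a minimal numpy illustration using synthetic "frozen" features in place of a real pretrained encoder; the function name and hyperparameters are assumptions for the sketch:

```python
import numpy as np

def linear_probe(features, labels, n_classes, lr=0.1, steps=200):
    """Train a linear softmax classifier on frozen features.

    Only the weights W and bias b are learned; the features
    (i.e. the encoder) are never updated.
    """
    n, d = features.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    y_onehot = np.eye(n_classes)[labels]
    for _ in range(steps):
        logits = features @ W + b
        logits = logits - logits.max(axis=1, keepdims=True)  # stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        grad = (probs - y_onehot) / n          # softmax cross-entropy grad
        W -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return W, b

# Stand-in for frozen self-supervised features: two separated clusters.
rng = np.random.default_rng(1)
feats = np.concatenate([rng.normal(-2, 1, (50, 8)), rng.normal(2, 1, (50, 8))])
labels = np.array([0] * 50 + [1] * 50)
W, b = linear_probe(feats, labels, n_classes=2)
acc = ((feats @ W + b).argmax(axis=1) == labels).mean()
```

The resulting accuracy is the quantity reported on the linear-evaluation leaderboards; the fine-tuning protocol differs in that the encoder's own weights are also updated.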

You may want to read some introductory blog posts before reading the papers and checking the leaderboards.

There is also Yann LeCun's talk at AAAI-20, which you can watch here (from 35:00).

(Image credit: A Simple Framework for Contrastive Learning of Visual Representations)


Latest papers without code

iBOT: Image BERT Pre-Training with Online Tokenizer

no code yet • 15 Nov 2021

We present a self-supervised framework iBOT that can perform masked prediction with an online tokenizer.

Ranked #1 on Self-Supervised Image Classification on ImageNet (using extra training data)

Tasks: Fine-tuning, Instance Segmentation (+4 more)

Compressive Visual Representations

no code yet • NeurIPS 2021

We verify this by developing SimCLR and BYOL formulations compatible with the Conditional Entropy Bottleneck (CEB) objective, allowing us to both measure and control the amount of compression in the learned representation, and observe their impact on downstream tasks.

Tasks: Contrastive Learning, Self-Supervised Image Classification

Large-Scale Unsupervised Person Re-Identification with Contrastive Learning

no code yet • 17 May 2021

In particular, most existing unsupervised and domain adaptation ReID methods utilize only the public datasets in their experiments, with labels removed.

Tasks: Contrastive Learning, Domain Adaptation (+4 more)

Mutual Contrastive Learning for Visual Representation Learning

no code yet • 26 Apr 2021

It is a generic framework that can be applied to both supervised and self-supervised representation learning.

Tasks: Contrastive Learning, Few-Shot Learning (+3 more)

Self-supervised Pre-training with Hard Examples Improves Visual Representations

no code yet • 25 Dec 2020

Self-supervised pre-training (SSP) employs random image transformations to generate training data for visual representation learning.

Tasks: Data Augmentation, Fine-tuning (+2 more)

A Pseudo-labelling Auto-Encoder for unsupervised image classification

no code yet • 6 Dec 2020

In this paper, we introduce a unique variant of the denoising Auto-Encoder and combine it with the perceptual loss to classify images in an unsupervised manner.

Tasks: Classification, Data Augmentation (+3 more)

Seed the Views: Hierarchical Semantic Alignment for Contrastive Representation Learning

no code yet • 4 Dec 2020

In this paper, we propose a hierarchical semantic alignment strategy via expanding the views generated by a single image to cross-sample and multi-level representations, modeling the invariance to semantically similar images in a hierarchical way.

Tasks: Contrastive Learning, Representation Learning (+2 more)

A comparative study of semi- and self-supervised semantic segmentation of biomedical microscopy data

no code yet • 11 Nov 2020

In recent years, Convolutional Neural Networks (CNNs) have become the state-of-the-art method for biomedical image analysis.

Tasks: Self-Supervised Image Classification, Semantic Segmentation

Representation Learning via Invariant Causal Mechanisms

no code yet • 15 Oct 2020

Self-supervised learning has emerged as a strategy to reduce the reliance on costly supervised signal by pretraining representations only using unlabeled data.

Tasks: Contrastive Learning, Representation Learning (+2 more)

Consensus Clustering With Unsupervised Representation Learning

no code yet • 3 Oct 2020

Recent advances in deep clustering and unsupervised representation learning are based on the idea that different views of an input image (generated through data augmentation techniques) must either be closer in the representation space, or have a similar cluster assignment.

Tasks: Data Augmentation, Deep Clustering (+3 more)