About

This is the task of image classification using representations learnt with self-supervised learning. Self-supervised methods generally involve a pretext task that is solved to learn a good representation, together with a loss function to learn with. One example of a loss function is an autoencoder-based loss, where the goal is to reconstruct an image pixel by pixel. A more popular recent example is a contrastive loss, which measures the similarity of sample pairs in a representation space and, unlike the fixed reconstruction target of an autoencoder, can use a varying target.
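As a rough illustration, below is a minimal NumPy sketch of one such contrastive loss, the NT-Xent loss popularized by SimCLR (the paper credited for the image on this page); the batch size, embedding dimension, and temperature are arbitrary toy values, not settings from any particular paper.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two augmented views of the same N images;
    row i of z1 and row i of z2 form a positive pair, all other rows are negatives."""
    z = np.concatenate([z1, z2], axis=0)               # (2N, D) stacked views
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # a sample is not its own negative
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # index of each positive
    sim = sim - sim.max(axis=1, keepdims=True)         # stabilize the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()     # cross-entropy on positive pairs

# Toy usage: random "embeddings" for a batch of 4 images, 2 views each.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 16))
z2 = rng.normal(size=(4, 16))
print(nt_xent_loss(z1, z2))
```

Note how the "target" here is the other augmented view of the same image, which changes with every batch and augmentation draw, rather than a fixed pixel-level reconstruction target.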

A common evaluation protocol is to train a linear classifier on top of (frozen) representations learnt by self-supervised methods. The leaderboards for the linear evaluation protocol can be found below. In practice, it is more common to fine-tune features on a downstream task. An alternative evaluation protocol therefore uses semi-supervised learning and fine-tunes on a percentage of the labels. The leaderboards for the fine-tuning protocol can be accessed here.
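To make the linear evaluation protocol concrete, here is a minimal sketch under stated assumptions: the pretrained encoder is replaced by a frozen random projection (a stand-in for, say, a contrastively trained ResNet), the data are random, and scikit-learn's logistic regression plays the role of the linear classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for a pretrained self-supervised encoder: a fixed ("frozen") random
# projection followed by a ReLU. In practice this would be a pretrained network
# whose weights receive no gradient updates during evaluation.
W_frozen = rng.normal(size=(784, 128))

def frozen_encoder(images):
    return np.maximum(images @ W_frozen, 0.0)

# Toy "dataset": 200 flattened 28x28 images with 10 random labels.
X = rng.normal(size=(200, 784))
y = rng.integers(0, 10, size=200)

features = frozen_encoder(X)                 # representations are never updated
probe = LogisticRegression(max_iter=1000)    # the linear classifier ("probe")
probe.fit(features[:150], y[:150])           # train only the probe on labels
print("linear-probe accuracy:", probe.score(features[150:], y[150:]))
```

The fine-tuning protocol differs only in that the encoder's weights would also be updated during the supervised stage, typically on a small labeled fraction of the dataset.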

You may want to read some blog posts before reading the papers and checking the leaderboards.

There is also Yann LeCun's talk at AAAI-20, which you can watch here (from 35:00 onwards).

(Image credit: A Simple Framework for Contrastive Learning of Visual Representations)

Benchmarks

(Interactive leaderboard: for each dataset, the best method with its paper and code.)

Latest papers without code

Mutual Contrastive Learning for Visual Representation Learning

26 Apr 2021

Mutual Contrastive Learning is a generic framework that can be applied to both supervised and self-supervised representation learning.

FEW-SHOT LEARNING REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION TRANSFER LEARNING

Improving Auto-Encoders' self-supervised image classification using pseudo-labelling via data augmentation and the perceptual loss

6 Dec 2020

In this paper, we introduce a novel method to pseudo-label unlabelled images and train an Auto-Encoder to classify them in a self-supervised manner, achieving high accuracy and consistency across several datasets.

DATA AUGMENTATION SELF-SUPERVISED IMAGE CLASSIFICATION UNSUPERVISED IMAGE CLASSIFICATION

Seed the Views: Hierarchical Semantic Alignment for Contrastive Representation Learning

4 Dec 2020

In this paper, we propose a hierarchical semantic alignment strategy that expands the views generated from a single image to cross-sample and multi-level representations, and that models the invariance to semantically similar images in a hierarchical way.

REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING

Boosting Contrastive Self-Supervised Learning with False Negative Cancellation

23 Nov 2020

Self-supervised representation learning has witnessed significant leaps fueled by recent progress in contrastive learning, which seeks to learn transformations that embed positive input pairs nearby, while pushing negative pairs far apart.

REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING SEMI-SUPERVISED IMAGE CLASSIFICATION

A comparative study of semi- and self-supervised semantic segmentation of biomedical microscopy data

11 Nov 2020

In recent years, Convolutional Neural Networks (CNNs) have become the state-of-the-art method for biomedical image analysis.

SELF-SUPERVISED IMAGE CLASSIFICATION SEMANTIC SEGMENTATION

Self-supervised Visual Feature Learning with Deep Neural Networks: A Survey

16 Feb 2019

This paper provides an extensive review of deep learning-based self-supervised general visual feature learning methods from images or videos.

SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING