About

Semi-supervised image classification leverages unlabelled data as well as labelled data to increase classification performance.

You may want to read some blog posts to get an overview before reading the papers and checking the leaderboards.

(Image credit: Self-Supervised Semi-Supervised Learning)

Greatest papers with code

Improved Techniques for Training GANs

NeurIPS 2016 tensorflow/models

We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.

CONDITIONAL IMAGE GENERATION SEMI-SUPERVISED IMAGE CLASSIFICATION

Meta Pseudo Labels

23 Mar 2020 google-research/google-research

We present Meta Pseudo Labels, a semi-supervised learning method that achieves a new state-of-the-art top-1 accuracy of 90.2% on ImageNet, which is 1.6% better than the existing state-of-the-art.

META-LEARNING SEMI-SUPERVISED IMAGE CLASSIFICATION
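The teacher-student feedback loop can be sketched in a few lines of PyTorch. Below is a heavily simplified, first-order version of the idea, not the paper's exact algorithm: the teacher's pseudo labels are reinforced in proportion to how much they improved the student on real labeled data. The function name and the loss-difference coefficient h are illustrative.

import torch
import torch.nn.functional as F

def mpl_step(teacher, student, opt_t, opt_s, x_lab, y_lab, x_unlab):
    # 1) Teacher produces hard pseudo labels for the unlabeled batch.
    with torch.no_grad():
        pseudo = teacher(x_unlab).argmax(dim=1)
        loss_before = F.cross_entropy(student(x_lab), y_lab)

    # 2) Student takes a gradient step on the pseudo-labeled batch.
    opt_s.zero_grad()
    F.cross_entropy(student(x_unlab), pseudo).backward()
    opt_s.step()

    # 3) The teacher is rewarded in proportion to how much its labels
    #    improved the student's labeled loss (h > 0: the pseudo labels
    #    helped; h < 0: the gradient pushes the teacher away from them).
    with torch.no_grad():
        h = (loss_before - F.cross_entropy(student(x_lab), y_lab)).item()
    opt_t.zero_grad()
    (h * F.cross_entropy(teacher(x_unlab), pseudo)).backward()
    opt_t.step()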

Milking CowMask for Semi-Supervised Image Classification

26 Mar 2020 google-research/google-research

Using CowMask to provide perturbations for semi-supervised consistency regularization, we achieve a state-of-the-art result on ImageNet with 10% labeled data, with a top-5 error of 8.76% and top-1 error of 26.06%.

SEMI-SUPERVISED IMAGE CLASSIFICATION
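A rough PyTorch sketch of the recipe follows: threshold smoothed Gaussian noise to obtain an irregular binary mask, then penalize disagreement between predictions on clean and masked views. The kernel size, sigma, p, and the erasure variant shown here are illustrative choices; the paper also uses a mask-mixing variant and a mean-teacher target.

import torch
import torch.nn.functional as F

def cow_mask(shape, sigma=8.0, p=0.5):
    # Threshold low-pass-filtered Gaussian noise so roughly a
    # fraction p of pixels survives, giving irregular "cow" blobs.
    noise = torch.randn(1, 1, *shape)
    k = 2 * int(2 * sigma) + 1                       # odd Gaussian kernel size
    xs = torch.arange(k, dtype=torch.float32) - k // 2
    g = torch.exp(-xs ** 2 / (2 * sigma ** 2))
    kernel = (g[:, None] * g[None, :]) / g.sum() ** 2
    smooth = F.conv2d(noise, kernel[None, None], padding=k // 2)
    return (smooth >= torch.quantile(smooth.flatten(), 1 - p)).float()

def masked_consistency(model, x):
    # Erasure variant: the prediction on the masked image should
    # match the (fixed) prediction on the clean image.
    with torch.no_grad():
        target = F.softmax(model(x), dim=-1)
    pred = F.log_softmax(model(x * cow_mask(x.shape[-2:])), dim=-1)
    return F.kl_div(pred, target, reduction='batchmean')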

Semi-Supervised Learning with Context-Conditional Generative Adversarial Networks

19 Nov 2016 eriklindernoren/PyTorch-GAN

We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss.

SEMI-SUPERVISED IMAGE CLASSIFICATION
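A minimal sketch of the in-painting game, assuming a generator gen that fills a blanked patch and a discriminator disc that emits a single real/fake logit; patch size and placement are illustrative, and in the paper the discriminator's features also drive the semi-supervised classifier.

import torch
import torch.nn.functional as F

def ccgan_losses(gen, disc, x, hole=16):
    # Blank out a central patch, let the generator in-paint it, and
    # train the discriminator to tell real from in-painted images.
    m = torch.ones_like(x)
    c = x.size(-1) // 2
    m[..., c - hole // 2:c + hole // 2, c - hole // 2:c + hole // 2] = 0
    x_fake = m * x + (1 - m) * gen(m * x)

    real, fake = disc(x), disc(x_fake.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real, torch.ones_like(real))
              + F.binary_cross_entropy_with_logits(fake, torch.zeros_like(fake)))
    g_out = disc(x_fake)
    g_loss = F.binary_cross_entropy_with_logits(g_out, torch.ones_like(g_out))
    return d_loss, g_loss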

mixup: Beyond Empirical Risk Minimization

ICLR 2018 rwightman/pytorch-image-models

We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.

DOMAIN GENERALIZATION SEMI-SUPERVISED IMAGE CLASSIFICATION
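The method itself is only a few lines: train on convex combinations of random example pairs and of their labels. A minimal PyTorch sketch, where alpha is the Beta-distribution hyperparameter from the paper and the function names are illustrative:

import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x, y, alpha=0.2):
    # Draw a mixing coefficient and combine each example with a
    # randomly paired one; labels are mixed with the same weight.
    lam = float(np.random.beta(alpha, alpha))
    perm = torch.randperm(x.size(0))
    return lam * x + (1 - lam) * x[perm], y, y[perm], lam

def mixup_loss(logits, y_a, y_b, lam):
    # Cross-entropy against the mixed label lam*y_a + (1-lam)*y_b.
    return (lam * F.cross_entropy(logits, y_a)
            + (1 - lam) * F.cross_entropy(logits, y_b))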

Bootstrap your own latent: A new approach to self-supervised Learning

13 Jun 2020 deepmind/deepmind-research

From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.

REPRESENTATION LEARNING SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING SEMI-SUPERVISED IMAGE CLASSIFICATION
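That sentence translates almost directly into a loss: a normalized regression of the online network's prediction onto a stop-gradient target projection, with the target network kept as an exponential moving average of the online one. A minimal PyTorch sketch (tau = 0.996 is the paper's base value; function names are illustrative):

import torch
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    # Regress the online prediction onto the stop-gradient target
    # projection of the other view; with L2-normalized vectors this
    # MSE equals 2 - 2 * cosine similarity. The paper applies it
    # symmetrically over both view orderings.
    p = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj.detach(), dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()

@torch.no_grad()
def ema_update(target, online, tau=0.996):
    # The target network is an exponential moving average of the
    # online network; no gradients ever flow into it.
    for pt, po in zip(target.parameters(), online.parameters()):
        pt.mul_(tau).add_((1 - tau) * po)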

Big Self-Supervised Models are Strong Semi-Supervised Learners

NeurIPS 2020 google-research/simclr

The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge.

SELF-SUPERVISED IMAGE CLASSIFICATION SEMI-SUPERVISED IMAGE CLASSIFICATION
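Steps one and two are standard pretraining and fine-tuning; the third step is ordinary knowledge distillation on unlabeled images. A minimal sketch of that distillation objective (the temperature T is an illustrative knob, not the paper's tuned value):

import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=1.0):
    # Cross-entropy between the fine-tuned teacher's softened
    # predictions on unlabeled images and the student's predictions.
    t = F.softmax(teacher_logits / T, dim=-1)
    s = F.log_softmax(student_logits / T, dim=-1)
    return -(t * s).sum(dim=-1).mean()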

Unsupervised Data Augmentation for Consistency Training

NeurIPS 2020 google-research/uda

In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.

IMAGE AUGMENTATION SEMI-SUPERVISED IMAGE CLASSIFICATION TEXT CLASSIFICATION TRANSFER LEARNING
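A minimal sketch of that consistency term, assuming the common formulation with prediction sharpening and confidence masking; augment stands in for an advanced policy such as RandAugment, and conf_thresh and T are illustrative values:

import torch
import torch.nn.functional as F

def uda_consistency(model, x_unlab, augment, conf_thresh=0.8, T=0.4):
    # Target: sharpened, fixed prediction on the original example;
    # low-confidence examples are masked out of the loss.
    with torch.no_grad():
        p = F.softmax(model(x_unlab) / T, dim=-1)
        mask = p.max(dim=-1).values >= conf_thresh
    # Prediction on the strongly augmented view should match the target.
    q = F.log_softmax(model(augment(x_unlab)), dim=-1)
    kl = F.kl_div(q, p, reduction='none').sum(dim=-1)
    return (kl * mask.float()).mean()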

Self-supervised Pretraining of Visual Features in the Wild

2 Mar 2021 facebookresearch/vissl

Recently, self-supervised learning methods like MoCo, SimCLR, BYOL and SwAV have reduced the gap with supervised methods.

Ranked #1 on Self-Supervised Image Classification on ImageNet (finetuned) (using extra training data)

SELF-SUPERVISED IMAGE CLASSIFICATION SELF-SUPERVISED LEARNING SEMI-SUPERVISED IMAGE CLASSIFICATION