Semi-Supervised Image Classification

99 papers with code • 37 benchmarks • 12 datasets

Semi-supervised image classification leverages unlabelled data in addition to labelled data to improve classification performance.

(Image credit: Self-Supervised Semi-Supervised Learning)



Most implemented papers

mixup: Beyond Empirical Risk Minimization

facebookresearch/mixup-cifar10 ICLR 2018

mixup trains a neural network on convex combinations of pairs of examples and their labels. We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
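The core of mixup is a one-line augmentation: replace each training pair with a convex combination of two random examples and their labels. A minimal NumPy sketch (function name and hyperparameters are illustrative, not the authors' API):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Mix a batch with a randomly shuffled copy of itself."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # mixing coefficient ~ Beta(alpha, alpha)
    idx = rng.permutation(len(x))         # random pairing within the batch
    x_mix = lam * x + (1 - lam) * x[idx]  # mix inputs
    y_mix = lam * y + (1 - lam) * y[idx]  # mix labels the same way
    return x_mix, y_mix

x = np.random.default_rng(1).normal(size=(4, 8))  # toy "images"
y = np.eye(4)                                     # one-hot labels
x_mix, y_mix = mixup_batch(x, y)
```

The mixed labels remain valid probability distributions, so the usual cross-entropy loss applies unchanged.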

A Simple Framework for Contrastive Learning of Visual Representations

google-research/simclr ICML 2020

This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
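SimCLR's training signal is the NT-Xent (normalized temperature-scaled cross-entropy) loss: for each image, two augmented views are pulled together while all other views in the batch act as negatives. A minimal NumPy sketch of that loss, assuming embeddings are interleaved so `z[2k]` and `z[2k+1]` are views of the same image (this layout and the temperature value are illustrative):

```python
import numpy as np

def nt_xent(z, tau=0.5):
    """NT-Xent loss over 2N embeddings; z[2k] and z[2k+1] are positives."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalise
    sim = z @ z.T / tau                   # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)        # never contrast a view with itself
    n = len(z)
    pos = np.arange(n) ^ 1                # index of each view's positive pair
    log_prob = sim[np.arange(n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()               # cross-entropy toward the positive

z = np.random.default_rng(0).normal(size=(8, 16))  # 4 images, 2 views each
loss = nt_xent(z)
```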

Improved Techniques for Training GANs

openai/improved-gan NeurIPS 2016

We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.

MixMatch: A Holistic Approach to Semi-Supervised Learning

google-research/mixmatch NeurIPS 2019

Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets.
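Two of MixMatch's ingredients are easy to isolate: guess labels for unlabeled data by averaging the model's predictions over several augmentations, then sharpen the averaged distribution with a temperature. A minimal NumPy sketch (the function name is illustrative; T=0.5 matches the paper's default):

```python
import numpy as np

def guess_and_sharpen(pred_list, T=0.5):
    """Average predictions over K augmentations of an unlabeled batch,
    then sharpen the averaged distribution with temperature T."""
    p = np.mean(pred_list, axis=0)        # (batch, classes) average guess
    p = p ** (1.0 / T)                    # temperature sharpening
    return p / p.sum(axis=-1, keepdims=True)

# Predictions for one example under two augmentations:
preds = [np.array([[0.6, 0.3, 0.1]]), np.array([[0.5, 0.4, 0.1]])]
q = guess_and_sharpen(preds)              # sharpened pseudo-label target
```

Sharpening pushes the guessed distribution toward its dominant class, which encourages low-entropy predictions on unlabeled data.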

Bootstrap your own latent: A new approach to self-supervised Learning

deepmind/deepmind-research 13 Jun 2020

From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
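Two pieces of BYOL can be sketched directly: the regression loss between the online network's prediction and the target network's projection, and the exponential-moving-average update that defines the target weights. A minimal NumPy sketch (names and the tau value are illustrative):

```python
import numpy as np

def byol_loss(q_online, z_target):
    """2 - 2 * cosine similarity between online prediction and target
    projection (the target side receives no gradients)."""
    q = q_online / np.linalg.norm(q_online, axis=1, keepdims=True)
    z = z_target / np.linalg.norm(z_target, axis=1, keepdims=True)
    return (2 - 2 * (q * z).sum(axis=1)).mean()

def ema_update(target_params, online_params, tau=0.99):
    """Target weights are an exponential moving average of online weights."""
    return [tau * t + (1 - tau) * o
            for t, o in zip(target_params, online_params)]

q = np.random.default_rng(0).normal(size=(4, 8))
perfect = byol_loss(q, q)                 # identical views -> zero loss
new_target = ema_update([np.zeros(3)], [np.ones(3)], tau=0.9)
```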

Improved Regularization of Convolutional Neural Networks with Cutout

uoguelph-mlrg/Cutout 15 Aug 2017

Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks.
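Cutout itself is a very small operation: zero out a square patch at a random location of the input image during training. A minimal NumPy sketch (patch size and fill value are illustrative defaults):

```python
import numpy as np

def cutout(img, size=8, rng=None):
    """Zero a size x size square centred at a random pixel, clipped to
    the image bounds. Returns a copy; the input is untouched."""
    rng = rng or np.random.default_rng(0)
    h, w = img.shape[:2]
    cy, cx = rng.integers(h), rng.integers(w)      # random centre
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = img.copy()
    out[y0:y1, x0:x1] = 0                          # mask the patch
    return out

img = np.ones((32, 32))
masked = cutout(img, size=8)
```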

Representation Learning with Contrastive Predictive Coding

davidtellez/contrastive-predictive-coding 10 Jul 2018

The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models.
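In CPC, a context vector summarising the past is used to score candidate future latents via a learned linear map (the InfoNCE objective then asks the true future to outscore negatives). A toy NumPy sketch of the scoring step only, with illustrative shapes and an identity prediction matrix standing in for the learned one:

```python
import numpy as np

def infonce_scores(c_t, candidates, W):
    """Score candidate future latents against the linear prediction W @ c_t;
    under InfoNCE, the true future latent should receive the highest score."""
    pred = W @ c_t                  # predicted future latent from context
    return candidates @ pred        # one dot-product score per candidate

rng = np.random.default_rng(0)
d = 8
W = np.eye(d)                       # toy stand-in for the learned matrix
c_t = rng.normal(size=d)            # context representation at time t
true_future = c_t.copy()            # toy case: future matches prediction
negatives = rng.normal(size=(5, d))
candidates = np.vstack([true_future, negatives])
scores = infonce_scores(c_t, candidates, W)
```

The full model replaces the toy vectors with encoder outputs and the context with an autoregressive summary, but the scoring step is exactly this bilinear form.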

FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence

google-research/fixmatch NeurIPS 2020

Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance.
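FixMatch's unlabeled-data loss combines the two ideas in its title: pseudo-label each example from its weakly augmented view, and train the strongly augmented view against that label only when the model is confident. A minimal NumPy sketch (function name is illustrative; the 0.95 threshold matches the paper's default):

```python
import numpy as np

def fixmatch_unlabeled_loss(weak_probs, strong_probs,
                            threshold=0.95, eps=1e-8):
    """Hard pseudo-labels from the weak view, cross-entropy on the strong
    view, masked to confidently predicted examples only."""
    conf = weak_probs.max(axis=1)
    pseudo = weak_probs.argmax(axis=1)             # hard pseudo-labels
    mask = conf >= threshold                       # confidence gate
    ce = -np.log(strong_probs[np.arange(len(pseudo)), pseudo] + eps)
    return (ce * mask).mean(), mask

weak = np.array([[0.97, 0.02, 0.01],   # confident -> contributes to loss
                 [0.50, 0.30, 0.20]])  # uncertain -> masked out
strong = np.array([[0.70, 0.20, 0.10],
                   [0.40, 0.40, 0.20]])
loss, mask = fixmatch_unlabeled_loss(weak, strong)
```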

Learning Transferable Visual Models From Natural Language Supervision

openai/CLIP 26 Feb 2021

State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories.

Unsupervised Data Augmentation for Consistency Training

google-research/uda NeurIPS 2020

In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of the noise, specifically noise produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
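The consistency-training objective underlying UDA can be sketched as a KL divergence between the model's prediction on the original example (held fixed as the target) and its prediction on a heavily augmented version. A minimal NumPy sketch with illustrative toy distributions:

```python
import numpy as np

def uda_consistency_loss(p_clean, p_aug, eps=1e-8):
    """KL(p_clean || p_aug): penalise predictions on the augmented view
    that drift from the (fixed) prediction on the original view."""
    kl = (p_clean * (np.log(p_clean + eps) - np.log(p_aug + eps))).sum(axis=1)
    return kl.mean()

p_clean = np.array([[0.8, 0.1, 0.1]])   # prediction on original example
p_same = p_clean.copy()                 # augmented view agrees -> zero loss
p_far = np.array([[0.2, 0.4, 0.4]])     # augmented view disagrees -> penalty
loss_same = uda_consistency_loss(p_clean, p_same)
loss_far = uda_consistency_loss(p_clean, p_far)
```

UDA's point is that *how* the augmented view is produced matters: advanced augmentation policies give a stronger consistency signal than simple noise.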