Semi-Supervised Image Classification

124 papers with code • 58 benchmarks • 13 datasets

Semi-supervised image classification leverages unlabeled data alongside labeled data to improve classification performance.

(Image credit: Self-Supervised Semi-Supervised Learning)

Libraries

Use these libraries to find Semi-Supervised Image Classification models and implementations

Most implemented papers

A Simple Framework for Contrastive Learning of Visual Representations

google-research/simclr ICML 2020

This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
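At the core of SimCLR is the NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss: each augmented view is pulled toward the other view of the same image and pushed away from all other views in the batch. A minimal pure-Python sketch on toy embedding vectors (real implementations operate on batched tensors; the temperature here is an illustrative choice):

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent_loss(views, temperature=0.5):
    """NT-Xent loss over 2N views, where views[2k] and views[2k+1]
    are the two augmentations of the same image (a positive pair)."""
    n = len(views)
    total = 0.0
    for i in range(n):
        j = i + 1 if i % 2 == 0 else i - 1  # index of i's positive partner
        # Denominator sums over every other view in the batch (the negatives
        # plus the positive), as in the softmax form of the loss.
        denom = sum(math.exp(cosine(views[i], views[k]) / temperature)
                    for k in range(n) if k != i)
        total += -math.log(math.exp(cosine(views[i], views[j]) / temperature) / denom)
    return total / n
```

Correctly matched pairs yield a lower loss than mismatched ones, which is what drives the representation to agree across augmentations.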

mixup: Beyond Empirical Risk Minimization

facebookresearch/mixup-cifar10 ICLR 2018

We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
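The mixup recipe itself is a one-liner: train on convex combinations of pairs of examples and of their one-hot labels, with the mixing coefficient drawn from a Beta distribution. A sketch on plain Python lists (frameworks apply this per batch on tensors):

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Build a virtual training example from two examples:
    x = lam*x1 + (1-lam)*x2, y = lam*y1 + (1-lam)*y2,
    with lam ~ Beta(alpha, alpha) as in the paper."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam
```

Because the labels are mixed with the same coefficient as the inputs, the model is trained to behave linearly between examples, which is the regularization effect described above.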

Learning Transferable Visual Models From Natural Language Supervision

openai/CLIP 26 Feb 2021

State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories.

Improved Techniques for Training GANs

openai/improved-gan NeurIPS 2016

We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.

Bootstrap your own latent: A new approach to self-supervised Learning

deepmind/deepmind-research 13 Jun 2020

From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
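Two ingredients of BYOL can be sketched compactly: the regression loss between the online network's prediction and the target network's representation (mean squared error of L2-normalized vectors), and the slow exponential-moving-average update that defines the target network. Toy version on flat parameter/embedding lists (tau here is illustrative):

```python
import math

def l2_normalize(v):
    """Scale a vector to unit L2 norm (zero vectors are left unscaled)."""
    n = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / n for x in v]

def byol_loss(prediction, target):
    """MSE between L2-normalized vectors; equals 2 - 2*cos_sim, so it
    is 0 for aligned vectors and 4 for opposite ones."""
    p, t = l2_normalize(prediction), l2_normalize(target)
    return sum((a - b) ** 2 for a, b in zip(p, t))

def ema_update(target_params, online_params, tau=0.996):
    """Move target-network weights slowly toward the online network's."""
    return [tau * t + (1 - tau) * o for t, o in zip(target_params, online_params)]
```

Only the online network receives gradients; the target network changes only through the EMA update, which is what stabilizes training without negative pairs.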

MixMatch: A Holistic Approach to Semi-Supervised Learning

google-research/mixmatch NeurIPS 2019

Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets.
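A central step in MixMatch is guessing labels for unlabeled images: average the model's predictions over K augmentations of the same image, then sharpen the average toward a low-entropy distribution. A sketch on plain probability lists (the temperature value is the illustrative default):

```python
def sharpen(probs, temperature=0.5):
    """Raise probabilities to the power 1/T and renormalize,
    pushing the distribution toward its argmax."""
    powered = [p ** (1 / temperature) for p in probs]
    s = sum(powered)
    return [p / s for p in powered]

def guess_label(augmented_probs, temperature=0.5):
    """Average predictions over K augmentations of one unlabeled image,
    then sharpen the average into a pseudo-label distribution."""
    k = len(augmented_probs)
    num_classes = len(augmented_probs[0])
    avg = [sum(p[i] for p in augmented_probs) / k for i in range(num_classes)]
    return sharpen(avg, temperature)
```

The sharpened guess is then used as a soft target for the unlabeled examples, which are mixed with labeled ones via mixup in the full algorithm.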

Representation Learning with Contrastive Predictive Coding

davidtellez/contrastive-predictive-coding 10 Jul 2018

The key insight of our model is to learn such representations by predicting the future in latent space by using powerful autoregressive models.

Improved Regularization of Convolutional Neural Networks with Cutout

uoguelph-mlrg/Cutout 15 Aug 2017

Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks.
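Cutout itself is a simple augmentation: during training, mask out a randomly placed square region of the input image. A toy sketch on a 2-D list-of-rows image (real implementations operate on image tensors and may fill with the dataset mean instead of zero):

```python
import random

def cutout(image, size, fill=0.0, rng=random):
    """Return a copy of a 2-D image with a random size x size square
    set to `fill`. The square is clipped at the image borders, as in
    the paper, so part of it may fall outside the image."""
    h, w = len(image), len(image[0])
    cy, cx = rng.randrange(h), rng.randrange(w)  # random centre
    out = [row[:] for row in image]
    for y in range(max(0, cy - size // 2), min(h, cy + size // 2)):
        for x in range(max(0, cx - size // 2), min(w, cx + size // 2)):
            out[y][x] = fill
    return out
```

Occluding a contiguous region forces the network to rely on a wider range of image context rather than any single discriminative patch.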

FixMatch: Simplifying Semi-Supervised Learning with Consistency and Confidence

google-research/fixmatch NeurIPS 2020

Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance.
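FixMatch combines two ideas: pseudo-label each unlabeled image from its weakly-augmented view, and train the model to predict that label on a strongly-augmented view, but only when the pseudo-label's confidence clears a threshold. The selection step can be sketched as (the threshold value is the commonly used default):

```python
def fixmatch_targets(weak_probs, threshold=0.95):
    """For each unlabeled example, return (pseudo_label, keep):
    the argmax class under the weakly-augmented prediction, and a
    mask that is True only when its confidence >= threshold."""
    targets = []
    for probs in weak_probs:
        conf = max(probs)
        label = probs.index(conf)
        targets.append((label, conf >= threshold))
    return targets
```

The confidence mask means low-quality pseudo-labels contribute no gradient, so the unlabeled loss only kicks in as the model becomes confident.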

Barlow Twins: Self-Supervised Learning via Redundancy Reduction

facebookresearch/barlowtwins 4 Mar 2021

This causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors.
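The Barlow Twins objective is computed on the cross-correlation matrix between the embeddings of two distorted views: drive the diagonal toward 1 (invariance) and the off-diagonal toward 0 (redundancy reduction). A toy pure-Python sketch, assuming the embeddings are already standardized per dimension over the batch (zero mean, unit variance):

```python
def barlow_twins_loss(z_a, z_b, lambd=0.005):
    """z_a, z_b: batch of embeddings (list of lists), one per view.
    Builds the d x d cross-correlation matrix and penalizes its
    deviation from the identity."""
    n, d = len(z_a), len(z_a[0])
    # C[i][j]: correlation between dim i of view A and dim j of view B.
    c = [[sum(z_a[b][i] * z_b[b][j] for b in range(n)) / n for j in range(d)]
         for i in range(d)]
    on_diag = sum((1 - c[i][i]) ** 2 for i in range(d))          # invariance term
    off_diag = sum(c[i][j] ** 2 for i in range(d)                # redundancy term
                   for j in range(d) if i != j)
    return on_diag + lambd * off_diag
```

Unlike contrastive methods, no negative pairs are needed: decorrelating the embedding dimensions is what prevents the trivial collapsed solution.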