65 papers with code • 34 benchmarks • 9 datasets
Semi-supervised image classification leverages unlabelled data as well as labelled data to increase classification performance.
You may want to read some blog posts to get an overview before reading the papers and checking the leaderboards.
(Image credit: Self-Supervised Semi-Supervised Learning)
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.
Ranked #11 on Conditional Image Generation on CIFAR-10
Using it to provide perturbations for semi-supervised consistency regularization, we achieve a state-of-the-art result on ImageNet with 10% labeled data, with a top-5 error of 8.76% and a top-1 error of 26.06%.
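Consistency regularization, as described here, penalizes disagreement between a model's predictions on an unlabeled image and on a perturbed copy of it. A minimal numpy sketch of the squared-error variant (the function name and loss form are illustrative, not this paper's exact objective):

```python
import numpy as np

def consistency_loss(p_clean, p_perturbed):
    """Mean squared error between the model's predicted class distributions
    on unlabeled images and on perturbed versions of the same images.
    No labels are needed, which is what makes the term semi-supervised."""
    return float(((p_clean - p_perturbed) ** 2).sum(axis=-1).mean())
```

In practice this unlabeled-data term is added, with a weighting coefficient, to the usual supervised cross-entropy on the labeled subset.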
We present Meta Pseudo Labels, a semi-supervised learning method that achieves a new state-of-the-art top-1 accuracy of 90.2% on ImageNet, which is 1.6% better than the existing state-of-the-art.
We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
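The mixup operation behind these findings is simple enough to sketch. A minimal numpy version, assuming one-hot label vectors (the function name and signature are illustrative):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Blend two training examples and their one-hot labels with a single
    mixing weight drawn from a Beta(alpha, alpha) distribution."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)          # lam in (0, 1)
    x = lam * x1 + (1 - lam) * x2         # convex combination of inputs
    y = lam * y1 + (1 - lam) * y2         # same combination of labels
    return x, y
```

Because the label is mixed with the same weight as the input, the resulting soft label still sums to one, and the model is trained to behave linearly between examples.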
Ranked #7 on Domain Generalization on ImageNet-A
We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss.
From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
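This excerpt describes a bootstrap objective in the style of BYOL: an online network predicts a target network's representation of another view, and the target is an exponential moving average of the online weights. A plain-numpy sketch of those two ingredients, with hypothetical names:

```python
import numpy as np

def ema_update(target_params, online_params, tau=0.996):
    """Target network weights are an exponential moving average (EMA)
    of the online network weights; no gradients flow into the target."""
    return {k: tau * target_params[k] + (1 - tau) * online_params[k]
            for k in target_params}

def bootstrap_loss(prediction, target_projection):
    """Negative cosine similarity (scaled to [0, 4]) between the online
    network's prediction and the target network's projection of the
    other augmented view."""
    p = prediction / np.linalg.norm(prediction)
    z = target_projection / np.linalg.norm(target_projection)
    return 2.0 - 2.0 * float(p @ z)
```

The loss is typically symmetrized by swapping which view goes through which network and averaging the two terms.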
The proposed semi-supervised learning algorithm can be summarized in three steps: unsupervised pretraining of a big ResNet model using SimCLRv2, supervised fine-tuning on a few labeled examples, and distillation with unlabeled examples for refining and transferring the task-specific knowledge.
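The third step, distillation with unlabeled examples, can be sketched as training the student to match the teacher's softened predictions. A minimal numpy version, assuming temperature-scaled softmax distillation (names and temperature are illustrative, not the paper's exact configuration):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax, computed stably."""
    logits = logits / T
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy of the student's softened predictions against the
    fine-tuned teacher's softened predictions on unlabeled examples."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T))
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean())
```

By Gibbs' inequality this cross-entropy is minimized exactly when the student's distribution matches the teacher's, which is what lets a smaller student inherit the big pretrained model's task-specific knowledge.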
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
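SimCLR's contrastive objective is the NT-Xent (normalized temperature-scaled cross-entropy) loss. A compact numpy sketch, assuming the batch stacks the two augmented views of image i at rows 2i and 2i+1:

```python
import numpy as np

def nt_xent(z, temperature=0.5):
    """NT-Xent loss over 2N embeddings, where rows 2i and 2i+1 are the two
    augmented views of image i; each view's positive is its partner and
    every other embedding in the batch serves as a negative."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / temperature                        # cosine similarities
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    pos = np.arange(n) ^ 1                             # partner index (0<->1, 2<->3, ...)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-log_prob[np.arange(n), pos].mean())
```

Lower loss means each view's embedding is closer (in cosine similarity) to its partner than to the other 2N-2 embeddings in the batch.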
Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks.
Ranked #3 on Semi-Supervised Image Classification on STL-10
This causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors.
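The objective described here, which reads like the Barlow Twins redundancy-reduction loss, drives the cross-correlation matrix between the two views' embeddings toward the identity. A numpy sketch under that assumption (the weighting `lam` is illustrative):

```python
import numpy as np

def redundancy_reduction_loss(z_a, z_b, lam=5e-3):
    """Cross-correlation matrix between batch-normalized embeddings of two
    distorted views; the diagonal is pushed toward 1 (views of the same
    sample get similar components) and the off-diagonal toward 0
    (components carry non-redundant information)."""
    n = z_a.shape[0]
    z_a = (z_a - z_a.mean(0)) / z_a.std(0)   # normalize each feature
    z_b = (z_b - z_b.mean(0)) / z_b.std(0)
    c = (z_a.T @ z_b) / n                    # cross-correlation matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return float(on_diag + lam * off_diag)
```

Unlike contrastive losses, this objective needs no negative pairs: decorrelating the embedding components is what prevents collapse.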
Ranked #1 on Image Classification on Places205