Semi-supervised image classification leverages unlabelled data in addition to labelled data to improve classification performance.
(Image credit: Self-Supervised Semi-Supervised Learning)
We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework.
#8 best model for Conditional Image Generation on CIFAR-10
We introduce a simple semi-supervised learning approach for images based on in-painting using an adversarial loss.
#5 best model for Semi-Supervised Image Classification on STL-10, 1000 Labels
We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
#7 best model for Semi-Supervised Image Classification on SVHN, 250 Labels
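The mixup idea summarised above trains on convex combinations of pairs of examples and their labels. A minimal sketch of that interpolation, assuming NumPy arrays and one-hot labels (the function name and toy data are illustrative, not the paper's code):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=np.random.default_rng(0)):
    """Blend two labelled examples; lam ~ Beta(alpha, alpha) as in mixup."""
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1 - lam) * x2  # convex combination of inputs
    y = lam * y1 + (1 - lam) * y2  # same combination of one-hot labels
    return x, y

# Toy usage: two 4-pixel "images" with one-hot labels
x_a, y_a = np.ones(4), np.array([1.0, 0.0])
x_b, y_b = np.zeros(4), np.array([0.0, 1.0])
x_mix, y_mix = mixup(x_a, y_a, x_b, y_b)
```

The mixed label stays a valid probability distribution, which is what discourages memorisation of individual (possibly corrupt) labels.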
In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically those produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning.
Without changing the network architecture, Mean Teacher achieves an error rate of 4.35% on SVHN with 250 labels, outperforming Temporal Ensembling trained with 1000 labels.
#4 best model for Semi-Supervised Image Classification on SVHN, 250 Labels
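The core of Mean Teacher is that the teacher's weights are an exponential moving average (EMA) of the student's weights rather than a separately trained copy. A minimal sketch of that update, assuming weights stored as a list of NumPy arrays (names are illustrative):

```python
import numpy as np

def ema_update(teacher_w, student_w, decay=0.99):
    """Mean Teacher: after each training step, move the teacher weights
    a small step toward the student weights (exponential moving average)."""
    return [decay * t + (1 - decay) * s for t, s in zip(teacher_w, student_w)]

# Toy check: the teacher drifts slowly toward a fixed student
teacher = [np.zeros(3)]
student = [np.ones(3)]
for _ in range(10):
    teacher = ema_update(teacher, student)
```

After n steps toward a constant student, each teacher weight equals 1 - decay**n, so the teacher averages over recent student states instead of mirroring the latest one.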
Semi-supervised learning has proven to be a powerful paradigm for leveraging unlabeled data to mitigate the reliance on large labeled datasets.
#2 best model for Image Classification on STL-10
In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets.
#5 best model for Semi-Supervised Image Classification on CIFAR-10, 250 Labels
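VAT penalises the change in the model's output under the worst-case small perturbation of the input. A rough sketch of one power-iteration step under stated assumptions: a tiny linear softmax model, and finite differences standing in for the autograd the actual method uses (all names here are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

def vat_direction(x, predict, xi=1e-2, eps=0.5, rng=np.random.default_rng(0)):
    """Approximate the virtual adversarial direction: the radius-eps vector r
    that most increases KL(p(x) || p(x + r)), via one finite-difference
    power-iteration step (in practice this gradient comes from autograd)."""
    p = predict(x)
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    base = kl(p, predict(x + xi * d))
    g, h = np.zeros_like(x), 1e-5
    for i in range(x.size):
        d_h = d.copy()
        d_h[i] += h
        g[i] = (kl(p, predict(x + xi * d_h)) - base) / h
    if np.linalg.norm(g) > 0:
        d = g / np.linalg.norm(g)
    return eps * d

# Toy two-class linear model (an assumption for illustration)
W = np.array([[1.0, -1.0], [-1.0, 1.0]])
predict = lambda x: softmax(W @ x)
x = np.array([0.3, 0.1])
r_adv = vat_direction(x, predict)
vat_loss = kl(predict(x), predict(x + r_adv))
```

The resulting `vat_loss` is the unsupervised term added to the training objective; it needs no labels, which is why VAT applies to both supervised and semi-supervised settings.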
We combine supervised learning with unsupervised learning in deep neural networks.
#15 best model for Semi-Supervised Image Classification on CIFAR-10, 4000 Labels
The method is not specialised to computer vision and operates on any paired dataset samples; in our experiments we use random transforms to obtain a pair from each image.
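Forming a pair from each image via random transforms and requiring the predictions to agree is a consistency objective. A minimal sketch, assuming a toy softmax model and Gaussian jitter as the "transform" (all names here are assumptions, not the paper's API):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Illustrative model and augmentation
W = np.array([[2.0, 0.0], [0.0, 2.0]])
predict = lambda x: softmax(W @ x)
augment = lambda x, rng: x + 0.1 * rng.normal(size=x.shape)  # random jitter

def consistency_loss(x, rng):
    """Penalise disagreement between predictions on two random transforms
    of the same unlabelled image (squared difference here)."""
    p1 = predict(augment(x, rng))
    p2 = predict(augment(x, rng))
    return float(np.mean((p1 - p2) ** 2))

rng = np.random.default_rng(0)
loss = consistency_loss(np.array([0.5, -0.5]), rng)
```

Because the loss compares two views of the same sample, any paired dataset works; images are just one instance, as the summary above notes.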
Semi-supervised learning (SSL) provides an effective means of leveraging unlabeled data to improve a model's performance.