Semi-Supervised Image Classification

124 papers with code • 58 benchmarks • 13 datasets

Semi-supervised image classification leverages unlabelled data in addition to labelled data to improve classification performance.
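As a rough illustration of the general recipe (a minimal hypothetical sketch, not any specific paper's method), the training objective typically combines a supervised loss on labelled examples with a weighted loss on confidently pseudo-labelled unlabelled examples:

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def semi_supervised_loss(logits_l, labels_l, logits_u, threshold=0.95, weight=1.0):
    """Cross-entropy on labelled data plus a weighted cross-entropy
    term on unlabelled predictions above a confidence threshold.
    (Threshold and weight values here are illustrative.)"""
    probs_l = softmax(logits_l)
    sup = -np.mean(np.log(probs_l[np.arange(len(labels_l)), labels_l] + 1e-12))

    probs_u = softmax(logits_u)
    conf = probs_u.max(axis=-1)          # model's confidence per unlabelled sample
    pseudo = probs_u.argmax(axis=-1)     # predicted class used as a pseudo-label
    mask = conf >= threshold             # keep only confident predictions
    if mask.any():
        unsup = -np.mean(np.log(probs_u[mask, pseudo[mask]] + 1e-12))
    else:
        unsup = 0.0
    return sup + weight * unsup
```

Raising the threshold trades pseudo-label coverage for pseudo-label accuracy; many methods anneal the unsupervised weight over training.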


(Image credit: Self-Supervised Semi-Supervised Learning)


Latest papers with no code

Pseudo-label Learning with Calibrated Confidence Using an Energy-based Model

no code yet • 15 Apr 2024

In pseudo-labeling (PL), which is a type of semi-supervised learning, pseudo-labels are assigned based on the confidence scores provided by the classifier; therefore, accurate confidence is important for successful PL.
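One common way to calibrate confidences before thresholding is temperature scaling; the sketch below uses it for illustration (it is not the paper's energy-based method, and the function names and threshold are assumptions):

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; T > 1 softens over-confident scores.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def assign_pseudo_labels(logits, threshold=0.9, T=1.0):
    """Return (indices, pseudo-labels) for the samples whose calibrated
    confidence clears the threshold."""
    probs = softmax(logits, T)
    conf = probs.max(axis=-1)
    keep = np.where(conf >= threshold)[0]
    return keep, probs.argmax(axis=-1)[keep]
```

With a higher temperature the same logits yield lower confidences, so fewer (but typically more reliable) pseudo-labels survive the threshold.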

Color-$S^{4}L$: Self-supervised Semi-supervised Learning with Image Colorization

no code yet • 8 Jan 2024

This work addresses semi-supervised image classification by integrating several effective self-supervised pretext tasks.

How To Overcome Confirmation Bias in Semi-Supervised Image Classification By Active Learning

no code yet • 16 Aug 2023

We conduct experiments with SSL and AL on simulated data challenges and find that random sampling does not mitigate confirmation bias and, in some cases, leads to worse performance than supervised learning.

Graph Convolutional Networks based on Manifold Learning for Semi-Supervised Image Classification

no code yet • 24 Apr 2023

Despite many advances, most approaches require a large amount of labeled data, which is often unavailable due to the cost and difficulty of manual labeling.

Semi-MAE: Masked Autoencoders for Semi-supervised Vision Transformers

no code yet • 4 Jan 2023

To alleviate this issue, inspired by the masked autoencoder (MAE), a data-efficient self-supervised learner, we propose Semi-MAE, a pure ViT-based SSL framework with a parallel MAE branch that assists visual representation learning and makes the pseudo-labels more accurate.

Self Meta Pseudo Labels: Meta Pseudo Labels Without The Teacher

no code yet • 27 Dec 2022

We present Self Meta Pseudo Labels, a novel semi-supervised learning method similar to Meta Pseudo Labels but without the teacher model.

Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework

no code yet • 3 Dec 2022

In this paper, we first show that federated ADMM is essentially a client-variance-reduced algorithm.

Contrastive Regularization for Semi-Supervised Learning

no code yet • 17 Jan 2022

Consistency regularization on label predictions has become a fundamental technique in semi-supervised learning, but it still requires a large number of training iterations to reach high performance.
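Consistency regularization penalizes disagreement between the predictions for two augmented views of the same unlabelled image. A minimal NumPy sketch (the squared-error penalty is one common choice among several; KL divergence is another):

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the class axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_view1, logits_view2):
    """Mean squared error between the class distributions predicted
    for two augmentations of the same unlabelled batch."""
    return float(np.mean((softmax(logits_view1) - softmax(logits_view2)) ** 2))
```

The loss is zero when the two views produce identical predictions, so minimizing it pushes the model toward augmentation-invariant outputs.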

Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet?

no code yet • 13 Jan 2022

Most notably, ReLICv2 is the first unsupervised representation learning method to consistently outperform the supervised baseline in a like-for-like comparison over a range of ResNet architectures.

Towards Discovering the Effectiveness of Moderately Confident Samples for Semi-Supervised Learning

no code yet • CVPR 2022

To address these problems, we propose a novel Taylor-expansion-inspired filtration (TEIF) framework, which admits moderately confident samples whose features or gradients are similar to those averaged over the labeled and highly confident unlabeled data.