Noisy Student Training is a semi-supervised learning approach. It extends the ideas of self-training and distillation by using equal-or-larger student models and adding noise to the student during learning. It has three main steps:

1. Train a teacher model on labeled images.
2. Use the teacher to generate pseudo labels on unlabeled images.
3. Train an equal-or-larger student model on the combination of labeled and pseudo-labeled images, with noise added to the student.

The algorithm is iterated a few times by treating the student as a teacher to relabel the unlabeled data and train a new student.
Noisy Student Training seeks to improve on self-training and distillation in two ways. First, it makes the student larger than, or at least equal to, the teacher so the student can better learn from a larger dataset. Second, it adds noise to the student so the noised student is forced to learn harder from the pseudo labels. To noise the student, it uses input noise such as RandAugment data augmentation, and model noise such as dropout and stochastic depth during training.
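The loop above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: a nearest-centroid classifier stands in for the EfficientNet teacher/student, and Gaussian input jitter stands in for RandAugment, dropout, and stochastic depth. All names (`fit_centroids`, `predict`, the blob data) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_centroids(X, y, n_classes):
    # Stand-in "model": one centroid per class (a real system trains a network).
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict(centroids, X):
    # Assign each point to the nearest class centroid.
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Toy data: two Gaussian blobs, a small labeled set and a larger unlabeled set.
X_lab = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(2, 0.3, (10, 2))])
y_lab = np.array([0] * 10 + [1] * 10)
X_unl = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])

# Step 1: train the teacher on labeled data only (no noise).
teacher = fit_centroids(X_lab, y_lab, 2)

for _ in range(3):  # iterate: each student becomes the next teacher
    # Step 2: teacher pseudo-labels the unlabeled data.
    y_pseudo = predict(teacher, X_unl)
    # Step 3: train a *noised* student on labeled + pseudo-labeled data;
    # input jitter stands in for RandAugment/dropout/stochastic depth.
    X_all = np.vstack([X_lab, X_unl])
    X_all = X_all + rng.normal(0, 0.1, X_all.shape)
    y_all = np.concatenate([y_lab, y_pseudo])
    teacher = fit_centroids(X_all, y_all, 2)

student = teacher  # final model after the last iteration
```

The key structural points survive even in this sketch: the teacher is trained clean, pseudo labels come from the teacher, and only the student sees noise.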
Source: Self-training with Noisy Student improves ImageNet classification
Task | Papers | Share
---|---|---
Image Classification | 6 | 10.17%
Automatic Speech Recognition (ASR) | 5 | 8.47%
Speech Recognition | 5 | 8.47%
Test | 4 | 6.78%
Self-Supervised Learning | 3 | 5.08%
Pseudo Label | 3 | 5.08%
Classification | 2 | 3.39%
Computed Tomography (CT) | 2 | 3.39%
General Classification | 2 | 3.39%
Component | Type
---|---
Dropout | Regularization
RandAugment | Image Data Augmentation
Stochastic Depth | Regularization