
Gradual Self-Training

Introduced by Kumar et al. in Understanding Self-Training for Gradual Domain Adaptation

Gradual self-training is a method for semi-supervised domain adaptation. The goal is to adapt an initial classifier, trained on a labeled source domain, using only unlabeled data whose distribution shifts gradually from the source towards a target domain.

This setting arises, for example, in applications ranging from sensor networks and self-driving car perception modules to brain-machine interfaces, where machine learning systems must adapt to data distributions that evolve over time.

The gradual self-training algorithm begins with a classifier $w_0$ trained on labeled examples from the source domain. For each successive intermediate domain $P_t$, the algorithm generates pseudolabels for unlabeled examples drawn from that domain and then trains a regularized supervised classifier on those pseudolabeled examples. The intuition is that after a single small shift, most examples are pseudolabeled correctly, so self-training learns a good classifier on the shifted data; a direct jump from the source to the target, by contrast, can be too large a shift for self-training to correct.

Source: Understanding Self-Training for Gradual Domain Adaptation
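The loop below is a minimal sketch of this procedure, assuming scikit-learn-style estimators. The function name, the `domains` sequence, and the choice of an L2-regularized logistic regression as the base classifier are illustrative assumptions, not details fixed by the paper.

```python
# Minimal sketch of gradual self-training. Assumptions: scikit-learn
# estimators, logistic regression as the base model, and `domains` given
# as a list of unlabeled arrays ordered from near-source to target.
from sklearn.linear_model import LogisticRegression

def gradual_self_train(X_source, y_source, domains, C=0.1):
    """Adapt a source classifier across gradually shifting unlabeled domains."""
    # Train the initial classifier w_0 on labeled source examples;
    # smaller C means stronger L2 regularization.
    clf = LogisticRegression(C=C).fit(X_source, y_source)
    for X_t in domains:
        # Pseudolabel the unlabeled examples from the current domain P_t
        # using the classifier from the previous domain.
        pseudo_y = clf.predict(X_t)
        # Retrain a fresh regularized classifier on the pseudolabels alone;
        # the labeled source data is not reused after the first step.
        clf = LogisticRegression(C=C).fit(X_t, pseudo_y)
    return clf
```

Retraining on each domain's pseudolabels alone, rather than fine-tuning or mixing in source data, is in the spirit of the self-training step analyzed in the paper, whose theory relies on regularization to keep pseudolabel errors from compounding across domains.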
