Cross-View Training for Semi-Supervised Learning

ICLR 2018 · Kevin Clark, Thang Luong, Quoc V. Le

We present Cross-View Training (CVT), a simple but effective method for deep semi-supervised learning. On labeled examples, the model is trained with standard cross-entropy loss. On an unlabeled example, the model first performs inference (acting as a "teacher") to produce soft targets. The model then learns from these soft targets (acting as a "student"). We deviate from prior work by adding multiple auxiliary student prediction layers to the model. The input to each student layer is a sub-network of the full model that has a restricted view of the input (e.g., only seeing one region of an image). The students can learn from the teacher (the full model) because the teacher sees more of each example. Concurrently, the students improve the quality of the representations used by the teacher as they learn to make predictions with limited data. When combined with Virtual Adversarial Training, CVT improves upon the current state-of-the-art on semi-supervised CIFAR-10 and semi-supervised SVHN. We also apply CVT to train models on five natural language processing tasks using hundreds of millions of sentences of unlabeled data. On all tasks CVT substantially outperforms supervised learning alone, resulting in models that improve upon or are competitive with the current state-of-the-art.
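To make the combined training signal concrete, below is a minimal PyTorch-style sketch of one CVT update, written under stated assumptions rather than as the authors' implementation: `model(x)` is assumed to return `(logits, shared_repr)`, and `aux_heads` is assumed to be a list of auxiliary student modules that each internally restrict their view of the shared representation; all of these names are illustrative.

```python
import torch
import torch.nn.functional as F

def cvt_step(model, aux_heads, labeled_x, labels, unlabeled_x):
    """One Cross-View Training step (sketch, hypothetical interfaces).

    Assumes `model(x)` returns (logits, shared_repr) and each module in
    `aux_heads` maps a restricted view of `shared_repr` to logits.
    """
    # Supervised part: standard cross-entropy on labeled examples.
    logits, _ = model(labeled_x)
    loss = F.cross_entropy(logits, labels)

    # Teacher pass on unlabeled examples: the full model produces soft
    # targets; gradients are not propagated through the teacher.
    with torch.no_grad():
        teacher_logits, _ = model(unlabeled_x)
        soft_targets = F.softmax(teacher_logits, dim=-1)

    # Student passes: each auxiliary head sees only a restricted view of
    # the input (e.g., one region of an image) and is trained to match
    # the teacher's soft targets.
    _, shared_repr = model(unlabeled_x)
    for head in aux_heads:
        student_log_probs = F.log_softmax(head(shared_repr), dim=-1)
        loss = loss + F.kl_div(student_log_probs, soft_targets,
                               reduction="batchmean")

    return loss
```

Because the student gradients flow back into the shared representation, training the restricted-view students also improves the features the teacher uses, which is the mechanism the abstract describes.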


Results from the Paper


Ranked #4 on Chunking on CoNLL 2000 (using extra training data)

Task     | Dataset    | Model                | Metric Name   | Metric Value | Global Rank
Chunking | CoNLL 2000 | ELMo + Multi-Task    | Exact Span F1 | 96.83        | #5
Chunking | CoNLL 2000 | CVT+Multi-Task+Large | Exact Span F1 | 96.98        | #4

Methods


No methods listed for this paper.