Unsupervised Data Augmentation for Consistency Training

Semi-supervised learning has lately shown much promise in improving deep learning models when labeled data is scarce. Common among recent approaches is the use of consistency training on a large amount of unlabeled data to constrain model predictions to be invariant to input noise. In this work, we present a new perspective on how to effectively noise unlabeled examples and argue that the quality of noising, specifically the noise produced by advanced data augmentation methods, plays a crucial role in semi-supervised learning. By substituting simple noising operations with advanced data augmentation methods such as RandAugment and back-translation, our method brings substantial improvements across six language and three vision tasks under the same consistency training framework. On the IMDb text classification dataset, with only 20 labeled examples, our method achieves an error rate of 4.20, outperforming the state-of-the-art model trained on 25,000 labeled examples. On a standard semi-supervised learning benchmark, CIFAR-10, our method outperforms all previous approaches and achieves an error rate of 5.43 with only 250 examples. Our method also combines well with transfer learning, e.g., when fine-tuning from BERT, and yields improvements in the high-data regime, such as on ImageNet, both when only 10% of the data is labeled and when the full labeled set is used together with 1.3M extra unlabeled examples. Code is available at https://github.com/google-research/uda.
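The consistency-training objective the abstract describes can be sketched as a supervised cross-entropy term plus a KL-divergence term that pushes the prediction on an augmented unlabeled input toward the (fixed) prediction on the clean input. The following is a minimal numpy sketch under that reading; the function names, the `lam` weight, and the exact loss shape are illustrative assumptions, not the released implementation (which also uses tricks such as confidence thresholding and a sharpened target distribution).

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def uda_loss(sup_logits, sup_labels, unsup_logits_orig, unsup_logits_aug, lam=1.0):
    """Illustrative UDA-style objective: cross-entropy on labeled data
    plus lam * KL(p(y|x) || p(y|x_aug)) on unlabeled data."""
    # Supervised cross-entropy on the labeled batch.
    p_sup = softmax(sup_logits)
    ce = -np.mean(np.log(p_sup[np.arange(len(sup_labels)), sup_labels] + 1e-12))
    # Consistency term: the clean-input prediction p acts as a constant
    # target; q is the prediction on the augmented counterpart.
    p = softmax(unsup_logits_orig)
    q = softmax(unsup_logits_aug)
    kl = np.mean(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1))
    return ce + lam * kl
```

When the augmented prediction matches the clean one, the KL term vanishes and only the supervised loss remains; the more an augmentation perturbs the model's output, the larger the consistency penalty.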

Published at NeurIPS 2020.
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Text Classification | Amazon-2 | BERT Finetune + UDA | Error | 3.5 | #2 |
| Text Classification | Amazon-5 | BERT Finetune + UDA | Error | 37.12 | #3 |
| Sentiment Analysis | Amazon Review Full | BERT large | Accuracy | 65.83 | #1 |
| Sentiment Analysis | Amazon Review Full | BERT large finetune UDA | Accuracy | 62.88 | #4 |
| Sentiment Analysis | Amazon Review Polarity | BERT large finetune UDA | Accuracy | 96.5 | #3 |
| Sentiment Analysis | Amazon Review Polarity | BERT large | Accuracy | 97.37 | #1 |
| Semi-Supervised Image Classification | CIFAR-10, 4000 Labels | UDA | Percentage error | 5.27 | #21 |
| Text Classification | DBpedia | BERT large | Error | 0.68 | #3 |
| Text Classification | DBpedia | BERT large UDA | Error | 1.09 | #14 |
| Image Classification | ImageNet | ResNet-50 (UDA) | Top 1 Accuracy | 79.04% | #729 |
| Semi-Supervised Image Classification | ImageNet - 10% labeled data | UDA | Top 5 Accuracy | 88.52 | #29 |
| Sentiment Analysis | IMDb | BERT large finetune UDA | Accuracy | 95.8 | #9 |
| Sentiment Analysis | IMDb | BERT large | Accuracy | 95.49 | #14 |
| Semi-Supervised Image Classification | SVHN, 1000 labels | UDA | Accuracy | 97.54 | #5 |
| Text Classification | Yelp-2 | BERT Finetune + UDA | Accuracy | 97.95% | #3 |
| Text Classification | Yelp-5 | BERT Finetune + UDA | Accuracy | 67.92% | #6 |
| Sentiment Analysis | Yelp Binary classification | BERT large | Error | 1.89 | #4 |
| Sentiment Analysis | Yelp Binary classification | BERT large finetune UDA | Error | 2.05 | #6 |
| Sentiment Analysis | Yelp Fine-grained classification | BERT large | Error | 29.32 | #3 |
| Sentiment Analysis | Yelp Fine-grained classification | BERT large finetune UDA | Error | 32.08 | #8 |
