Semi-Supervised Learning by Augmented Distribution Alignment

ICCV 2019 · Qin Wang, Wen Li, Luc Van Gool

In this work, we propose a simple yet effective semi-supervised learning approach called Augmented Distribution Alignment. We reveal that an essential sampling bias exists in semi-supervised learning due to the limited number of labeled samples, which often leads to a considerable empirical distribution mismatch between labeled and unlabeled data. To address this, we propose to align the empirical distributions of labeled and unlabeled data and thereby alleviate the bias. On one hand, inspired by domain adaptation methods, we adopt an adversarial training strategy to minimize the distribution distance between labeled and unlabeled data. On the other hand, to deal with the small sample size of the labeled data, we propose a simple interpolation strategy to generate pseudo training samples. Both strategies can be easily incorporated into existing deep neural networks. We demonstrate the effectiveness of our approach on the SVHN and CIFAR-10 benchmark datasets. Our code is available at https://github.com/qinenergy/adanet.
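The two ingredients described above (adversarial alignment of the labeled and unlabeled empirical distributions, plus interpolation between the two sets to generate pseudo training samples) can be illustrated with a minimal PyTorch sketch. This is not the authors' released implementation; the helper names (grad_reverse, cross_set_mix), the toy networks, the loss weighting, and the way targets are assigned to mixed samples are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


def cross_set_mix(x_lab, x_unlab, alpha=1.0):
    """Interpolate labeled and unlabeled inputs to generate pseudo training samples."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * x_lab + (1.0 - lam) * x_unlab, lam


# Hypothetical toy networks standing in for the paper's backbone.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
classifier = nn.Linear(128, 10)        # class predictions
domain_classifier = nn.Linear(128, 2)  # labeled-vs-unlabeled discriminator

opt = torch.optim.SGD(
    list(feature_extractor.parameters())
    + list(classifier.parameters())
    + list(domain_classifier.parameters()),
    lr=1e-3,
)

# One illustrative training step on random CIFAR-10-shaped tensors.
x_lab = torch.randn(8, 3, 32, 32)
y_lab = torch.randint(0, 10, (8,))
x_unlab = torch.randn(8, 3, 32, 32)

x_mix, lam = cross_set_mix(x_lab, x_unlab)

f_lab = feature_extractor(x_lab)
f_unlab = feature_extractor(x_unlab)
f_mix = feature_extractor(x_mix)

# Supervised loss on labeled data; here the mixed samples simply reuse the labeled
# targets weighted by the mixing coefficient (a simplification of the paper's scheme).
cls_loss = F.cross_entropy(classifier(f_lab), y_lab) \
    + lam * F.cross_entropy(classifier(f_mix), y_lab)

# Adversarial alignment: the domain classifier tries to tell labeled from unlabeled
# features, while the reversed gradient pushes the feature extractor to make the two
# empirical distributions indistinguishable.
dom_logits = domain_classifier(grad_reverse(torch.cat([f_lab, f_unlab]), lambd=0.1))
dom_labels = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])
dom_loss = F.cross_entropy(dom_logits, dom_labels)

opt.zero_grad()
(cls_loss + dom_loss).backward()
opt.step()
```

In a full training loop, the same two losses would be computed per mini-batch drawn jointly from the labeled and unlabeled pools, with the reversal strength and mixing coefficient treated as tunable hyperparameters.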


Datasets

SVHN, CIFAR-10

Results

Task: Semi-Supervised Image Classification
Dataset: CIFAR-10, 4000 Labels
Model: ADA-Net (ConvNet)
Metric: Percentage error
Metric Value: 8.72
Global Rank: #34
