EnAET: A Self-Trained framework for Semi-Supervised and Supervised Learning with Ensemble Transformations

21 Nov 2019  ·  Xiao Wang, Daisuke Kihara, Jiebo Luo, Guo-Jun Qi ·

Deep neural networks have been successfully applied to many real-world tasks, but such successes rely heavily on large amounts of labeled data, which are expensive to obtain. Recently, many semi-supervised learning methods have been proposed and have achieved excellent performance. In this study, we propose a new framework, EnAET, that further improves existing semi-supervised methods with self-supervised information. To the best of our knowledge, all current semi-supervised methods improve performance through prediction-consistency and confidence-based ideas; we are the first to explore the role of self-supervised representations in semi-supervised learning under a rich family of transformations. Consequently, our framework can integrate the self-supervised information as a regularization term to further improve all current semi-supervised methods. In our experiments, we use MixMatch, the current state-of-the-art semi-supervised method, as the baseline for testing the proposed EnAET framework. We adopt the same hyper-parameters across all datasets, which demonstrates the strong generalization ability of the EnAET framework. Experimental results on several datasets show that EnAET substantially improves the performance of current semi-supervised algorithms. Moreover, the framework also improves supervised learning by a large margin, including in the extremely challenging scenario with only 10 labeled images per class. The code and experiment records are available at https://github.com/maple-research-lab/EnAET.
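The core idea described above, adding a self-supervised transformation-prediction (AET-style) loss as a regularization term on top of a semi-supervised consistency objective, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the loss shapes, the single collapsed transformation term, and the weight `lam` are hypothetical placeholders for the paper's ensemble of transformation-specific losses.

```python
import numpy as np

def aet_regularizer(pred_params, true_params):
    """Self-supervised AET term: error between the transformation
    parameters predicted from an (original, transformed) feature pair
    and the parameters that were actually applied."""
    return float(np.mean((pred_params - true_params) ** 2))

def consistency_loss(p_student, p_teacher):
    """Semi-supervised consistency term: distance between
    class-probability predictions on differently augmented views
    (a stand-in for the MixMatch objective)."""
    return float(np.mean((p_student - p_teacher) ** 2))

def enaet_objective(p_student, p_teacher, pred_params, true_params, lam=1.0):
    """EnAET-style combined objective: the semi-supervised loss plus a
    weighted self-supervised regularizer. In the paper this second term
    is an ensemble over several transformation families; here it is
    collapsed to a single term for brevity."""
    return consistency_loss(p_student, p_teacher) \
        + lam * aet_regularizer(pred_params, true_params)
```

Because the regularizer is simply an additive term, it can be attached to any existing semi-supervised loss without changing that method's own training procedure, which is why the abstract claims the framework applies to all current semi-supervised methods.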

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Image Classification | CIFAR-10 | EnAET | Percentage correct | 98.01 | #54 |
| Image Classification | CIFAR-10 | EnAET | PARAMS | 36.5M | #214 |
| Image Classification | CIFAR-10 | EnAET | Top-1 Accuracy | 98.01 | #19 |
| Image Classification | CIFAR-10 | EnAET | Parameters | 36.5M | #4 |
| Image Classification | CIFAR-100 | EnAET | Percentage correct | 83.13 | #91 |
| Semi-Supervised Image Classification | CIFAR-100, 10000 Labels | EnAET (WRN-28-2-Large) | Percentage error | 22.92 | #15 |
| Semi-Supervised Image Classification | CIFAR-100, 10000 Labels | EnAET (WRN-28-2) | Percentage error | 26.93±0.21 | #21 |
| Semi-Supervised Image Classification | CIFAR-100, 1000 Labels | EnAET | Percentage correct | 41.27 | #1 |
| Semi-Supervised Image Classification | CIFAR-100, 5000 Labels | EnAET | Percentage correct | 68.17 | #2 |
| Semi-Supervised Image Classification | CIFAR-10, 250 Labels | EnAET | Percentage correct | 92.4 | #2 |
| Semi-Supervised Image Classification | CIFAR-10, 4000 Labels | EnAET | Percentage error | 4.18 | #10 |
| Image Classification | STL-10 | EnAET | Percentage correct | 95.48 | #18 |
| Semi-Supervised Image Classification | STL-10 | EnAET | Accuracy | 95.48 | #1 |
| Semi-Supervised Image Classification | STL-10, 1000 Labels | EnAET | Accuracy | 91.96 | #6 |
| Image Classification | SVHN | EnAET | Percentage error | 2.22 | #32 |
| Semi-Supervised Image Classification | SVHN, 1000 Labels | EnAET | Accuracy | 97.58 | #4 |
| Semi-Supervised Image Classification | SVHN, 250 Labels | EnAET | Accuracy | 96.79 | #5 |