HybridNet: Classification and Reconstruction Cooperation for Semi-Supervised Learning

ECCV 2018 · Thomas Robert, Nicolas Thome, Matthieu Cord

In this paper, we introduce a new model for leveraging unlabeled data to improve the generalization performance of image classifiers: a two-branch encoder-decoder architecture called HybridNet. The first branch receives the supervision signal and is dedicated to extracting invariant, class-related representations. The second branch is fully unsupervised and dedicated to modeling the information discarded by the first branch, in order to reconstruct the input data. To further support the expected behavior of our model, we propose an original training objective that favors stability in the discriminative branch and complementarity between the representations learned by the two branches. HybridNet outperforms state-of-the-art results on CIFAR-10, SVHN and STL-10 in various semi-supervised settings. In addition, visualizations and ablation studies validate our contributions and the behavior of the model on both the CIFAR-10 and STL-10 datasets.
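A minimal PyTorch-style sketch of the two-branch idea described above is given below. All module names, layer sizes and loss weights are illustrative assumptions, not the authors' exact architecture; the paper's full objective also includes stability and branch-complementarity terms that are omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HybridNetSketch(nn.Module):
    """Two-branch encoder-decoder sketch: a discriminative branch trained with
    the supervised signal, and an unsupervised branch capturing the information
    discarded by the first branch so that their decoders can jointly reconstruct
    the input."""

    def __init__(self, num_classes=10):
        super().__init__()
        # Discriminative encoder (class-related features).
        self.enc_c = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Unsupervised encoder (complementary, non-class information).
        self.enc_u = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(64, num_classes)
        # One decoder per branch; their outputs are summed into the reconstruction.
        self.dec_c = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )
        self.dec_u = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        h_c = self.enc_c(x)                             # class-related features
        h_u = self.enc_u(x)                             # complementary features
        logits = self.classifier(h_c.mean(dim=(2, 3)))  # global average pooling
        x_hat = self.dec_c(h_c) + self.dec_u(h_u)       # joint reconstruction
        return logits, x_hat


def hybrid_loss(logits, labels, x, x_hat,
                lambda_cls=1.0, lambda_rec=1.0, labeled=True):
    """Combined objective sketch: reconstruction on every sample, classification
    only on labeled samples (semi-supervised setting)."""
    loss = lambda_rec * F.mse_loss(x_hat, x)
    if labeled:
        loss = loss + lambda_cls * F.cross_entropy(logits, labels)
    return loss


# Usage example on a dummy batch of 32x32 images.
model = HybridNetSketch(num_classes=10)
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
logits, x_hat = model(x)
loss = hybrid_loss(logits, y, x, x_hat, labeled=True)
```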


Results from the Paper


Task                  Dataset  Model            Metric              Value  Global Rank
Image Classification  STL-10   ResNet baseline  Percentage correct  82.00  #59
Image Classification  STL-10   HybridNet        Percentage correct  84.10  #53
Image Classification  STL-10   SWWAE            Percentage correct  74.33  #77
