Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.
In recent years, supervised learning with convolutional neural networks (CNNs) has seen widespread adoption in computer vision applications.
We therefore propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.
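The core of CVT's use of unlabeled data is a consistency objective: auxiliary prediction modules, each seeing a restricted view of the input, are trained to match the full-view primary module's prediction. A minimal sketch of that loss, assuming a toy NumPy setup in which views are already encoded as logits (function names and shapes are illustrative, not from the paper):

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cvt_unsup_loss(primary_logits, aux_logits_list):
    """Cross-entropy of each auxiliary (restricted-view) prediction
    against the primary full-view prediction on unlabeled inputs.
    The primary prediction is treated as a fixed target (no gradient)."""
    target = softmax(primary_logits)
    loss = 0.0
    for aux_logits in aux_logits_list:
        log_p = np.log(softmax(aux_logits) + 1e-12)
        loss += -(target * log_p).sum(axis=-1).mean()
    return loss / len(aux_logits_list)
```

By Gibbs' inequality this loss is minimized (down to the entropy of the target) exactly when each auxiliary view reproduces the primary prediction, which is what pushes the shared encoder to build representations recoverable from partial views.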
However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale.
The success of deep neural networks often relies on a large amount of labeled examples, which can be difficult to obtain in many real scenarios.
The method incorporates rotation invariance into the feature learning framework. Although rotation invariance is one of many well-studied properties of visual representations, it has rarely been appreciated or exploited by previous self-supervised representation learning methods based on deep convolutional neural networks.
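Rotation-based self-supervision is typically set up by applying a random multiple of 90 degrees to each image and asking the network to predict which rotation was applied. A minimal sketch of that batch construction (the helper name and array layout are my own, not from the paper):

```python
import numpy as np

def make_rotation_batch(images, rng):
    """Build a self-supervised rotation-prediction batch.

    images: array of shape (N, H, W) or (N, H, W, C).
    Returns the rotated images and a label k in {0, 1, 2, 3},
    meaning a counter-clockwise rotation by k * 90 degrees.
    A classifier trained on these labels never needs human annotation.
    """
    labels = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k, axes=(0, 1))
                        for img, k in zip(images, labels)])
    return rotated, labels
```

Solving this pretext task well requires recognizing object orientation, which is what makes the learned features transferable to downstream recognition.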
In contrast to the traditional view that the discriminator converges to a constant function, here we show that it can provide useful information for downstream tasks, e.g., feature extraction for classification.
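One common way to reuse a trained discriminator is to read out its penultimate activations as features for a downstream classifier. A toy NumPy sketch of that pattern, with a made-up two-layer discriminator standing in for a real GAN's (all sizes and names are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discriminator: input (dim 8) -> hidden features (dim 16) -> real/fake logit.
W1 = rng.normal(size=(8, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)

def discriminator(x, return_features=False):
    h = np.maximum(x @ W1 + b1, 0.0)  # penultimate ReLU activations
    score = h @ W2 + b2               # real/fake logit
    return h if return_features else score
```

For downstream classification, one would freeze the discriminator and fit a linear classifier on `discriminator(x, return_features=True)` instead of on raw inputs.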
We consider spatial context, for which we solve so-called jigsaw puzzles, i.e., each image is cut into a grid of tiles that are then shuffled, and the goal is to recover the correct configuration.
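The puzzle construction itself is straightforward: tile the image on a regular grid, permute the tiles, and keep the permutation as the learning target. A small sketch, assuming square tiles and NumPy arrays (helper names are my own):

```python
import numpy as np

def make_jigsaw_puzzle(image, grid=3, rng=None):
    """Cut an image into grid x grid tiles and shuffle them.
    Returns the shuffled tiles and the permutation used (the target
    a jigsaw-solving network would be trained to predict)."""
    if rng is None:
        rng = np.random.default_rng()
    H, W = image.shape[:2]
    th, tw = H // grid, W // grid
    tiles = [image[r*th:(r+1)*th, c*tw:(c+1)*tw]
             for r in range(grid) for c in range(grid)]
    perm = rng.permutation(grid * grid)
    shuffled = [tiles[i] for i in perm]
    return shuffled, perm

def solve_jigsaw(shuffled, perm, grid=3):
    """Reassemble the image by inverting the permutation."""
    inv = np.argsort(perm)
    tiles = [shuffled[i] for i in inv]
    rows = [np.concatenate(tiles[r*grid:(r+1)*grid], axis=1)
            for r in range(grid)]
    return np.concatenate(rows, axis=0)
```

Predicting the permutation forces the network to reason about the spatial layout of object parts, which is the source of the learned representation's usefulness.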