Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm -- an unsupervised weight update rule -- that produces representations useful for this task.
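As a rough illustration of the loop structure this describes, here is a minimal PyTorch sketch. It is not the paper's method: the "learned update rule" is simplified to the gradient of a small surrogate network, the semi-supervised evaluation is a prototypical-style probe, and all shapes, names, and step sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D_IN, D_REP, N_CLASS = 32, 16, 4  # illustrative sizes, not from the paper

# Meta-learned component: a tiny network scoring encoder outputs. Its
# gradient w.r.t. the encoder weights acts as the "unsupervised weight
# update rule" (a simplification of the paper's learned rule).
surrogate = nn.Sequential(nn.Linear(D_REP, 32), nn.Tanh(), nn.Linear(32, 1))
meta_opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

def encode(w, x):
    # Linear encoder applied functionally so we can differentiate
    # through the unrolled inner-loop updates of w.
    return torch.tanh(x @ w)

for meta_step in range(100):
    w = (0.1 * torch.randn(D_IN, D_REP)).requires_grad_()  # fresh encoder
    x_unlab = torch.randn(64, D_IN)                        # unlabeled batch
    x_lab = torch.randn(32, D_IN)                          # labeled batch
    y_lab = torch.arange(N_CLASS).repeat(8)                # balanced labels

    # Inner loop: apply the learned update rule on unlabeled data only.
    for _ in range(5):
        inner_loss = surrogate(encode(w, x_unlab)).mean()
        (g,) = torch.autograd.grad(inner_loss, w, create_graph=True)
        w = w - 0.1 * g

    # Outer loop: score the resulting representations on few-shot
    # classification (nearest class prototype) and update the rule.
    z = encode(w, x_lab)
    z_sup, z_qry = z[:16], z[16:]
    y_sup, y_qry = y_lab[:16], y_lab[16:]
    protos = torch.stack([z_sup[y_sup == c].mean(0) for c in range(N_CLASS)])
    outer_loss = F.cross_entropy(-torch.cdist(z_qry, protos), y_qry)

    meta_opt.zero_grad()
    outer_loss.backward()   # backprops through the unrolled inner loop
    meta_opt.step()
```

The key structural point the sketch preserves is that labels enter only the outer objective, so the update rule itself remains unsupervised.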
We propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.
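The core of CVT is straightforward to sketch: auxiliary prediction modules that see restricted views of the input are trained on unlabeled data to match the full-view primary module. The PyTorch sketch below assumes a sequence-tagging setup; the two restricted views (forward-only and backward-only LSTM states), layer sizes, and all names are illustrative assumptions, not the paper's exact modules.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVTTagger(nn.Module):
    def __init__(self, vocab, dim, n_tags):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.bilstm = nn.LSTM(dim, dim, batch_first=True, bidirectional=True)
        self.primary = nn.Linear(2 * dim, n_tags)  # sees the full Bi-LSTM view
        self.fwd_only = nn.Linear(dim, n_tags)     # restricted view: fwd states
        self.bwd_only = nn.Linear(dim, n_tags)     # restricted view: bwd states

    def forward(self, tokens):
        h, _ = self.bilstm(self.emb(tokens))       # (B, T, 2*dim)
        h_fwd, h_bwd = h.chunk(2, dim=-1)          # split fwd/bwd halves
        return self.primary(h), self.fwd_only(h_fwd), self.bwd_only(h_bwd)

def cvt_step(model, labeled, unlabeled):
    tokens_l, tags = labeled
    # Supervised loss: primary module on labeled data.
    logits, _, _ = model(tokens_l)
    sup = F.cross_entropy(logits.flatten(0, 1), tags.flatten())

    # Cross-view loss: auxiliary modules are trained to match the
    # primary module's (detached) predictions on unlabeled data.
    logits_u, aux_f, aux_b = model(unlabeled)
    target = logits_u.softmax(-1).detach()
    cvt = sum(F.kl_div(a.log_softmax(-1), target, reduction="batchmean")
              for a in (aux_f, aux_b))
    return sup + cvt
```

Because the cross-view loss backpropagates through the shared Bi-LSTM, the unlabeled data improves the encoder itself, not just the auxiliary heads.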
In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications.
Continual learning aims to improve the ability of modern learning systems to deal with non-stationary distributions, typically by attempting to learn a series of tasks sequentially.
However, in order to successfully learn such features, these networks usually require massive amounts of manually labeled data, which is both expensive and impractical to scale.
We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning.
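A split-brain autoencoder splits the network into two disjoint sub-networks, each predicting one subset of the input channels from the complementary subset, and the concatenated features of the two halves serve as the learned representation. Below is a minimal PyTorch sketch assuming a 3-channel (e.g., Lab) input and tiny convolutional sub-networks; the paper's models quantize the targets and train with a classification loss, whereas plain L2 regression is shown here for brevity, and all names and sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitBrain(nn.Module):
    def __init__(self):
        super().__init__()
        # Sub-network A: predict channels 1..2 (e.g., color) from channel 0.
        self.net_a = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),
        )
        # Sub-network B: predict channel 0 (e.g., lightness) from channels 1..2.
        self.net_b = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):                # x: (B, 3, H, W)
        x_a, x_b = x[:, :1], x[:, 1:]    # disjoint channel splits
        return self.net_a(x_a), self.net_b(x_b)

def split_brain_loss(model, x):
    pred_color, pred_gray = model(x)
    # Each half solves a cross-channel prediction task on the other
    # half's channels; no labels are needed.
    return F.mse_loss(pred_color, x[:, 1:]) + F.mse_loss(pred_gray, x[:, :1])
```

At representation-extraction time, the intermediate activations of the two sub-networks are concatenated, so the full input is covered even though neither half ever sees it all.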
The success of deep neural networks often relies on a large number of labeled examples, which can be difficult to obtain in many real scenarios.