Unsupervised Representation Learning
120 papers with code • 0 benchmarks • 2 datasets
We propose Cross-View Training (CVT), a semi-supervised learning algorithm that improves the representations of a Bi-LSTM sentence encoder using a mix of labeled and unlabeled data.
Ranked #1 on CCG Supertagging on CCGbank
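Concretely, CVT adds auxiliary prediction modules that see only restricted views of each input (for example, only the forward or only the backward LSTM states) and trains them, on unlabeled data, to match the full-view predictions of the primary module, while labeled data gets ordinary cross-entropy. A minimal sketch under those assumptions, with toy sizes and crude mean pooling that are not the paper's:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
VOCAB, DIM, N_CLS = 100, 32, 5

# Toy Bi-LSTM sentence encoder; a hypothetical stand-in for the paper's.
embed = nn.Embedding(VOCAB, DIM)
bilstm = nn.LSTM(DIM, DIM, batch_first=True, bidirectional=True)
primary_head = nn.Linear(2 * DIM, N_CLS)   # sees the full bidirectional view
aux_fwd = nn.Linear(DIM, N_CLS)            # sees forward states only
aux_bwd = nn.Linear(DIM, N_CLS)            # sees backward states only

def views(tokens):
    h, _ = bilstm(embed(tokens))           # (B, T, 2*DIM)
    h = h.mean(1)                          # crude sentence pooling
    return h, h[:, :DIM], h[:, DIM:]       # full, forward-only, backward-only

# Labeled batch: ordinary supervised cross-entropy.
x_l = torch.randint(0, VOCAB, (8, 12))
y_l = torch.randint(0, N_CLS, (8,))
full_l, _, _ = views(x_l)
sup_loss = F.cross_entropy(primary_head(full_l), y_l)

# Unlabeled batch: auxiliary modules with restricted views are trained to
# match the primary module's full-view prediction (held fixed as a target).
x_u = torch.randint(0, VOCAB, (16, 12))
full_u, fwd_u, bwd_u = views(x_u)
with torch.no_grad():
    target = F.softmax(primary_head(full_u), dim=-1)
cons_loss = sum(
    F.kl_div(F.log_softmax(head(view), dim=-1), target, reduction="batchmean")
    for head, view in [(aux_fwd, fwd_u), (aux_bwd, bwd_u)]
)
loss = sup_loss + cons_loss   # gradients through both terms improve the encoder
```

The consistency term is what lets unlabeled data shape the shared encoder: the auxiliary heads can only match the full-view prediction by pushing useful information into the restricted representations.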
Specifically, we target semi-supervised classification performance, and we meta-learn an algorithm (an unsupervised weight update rule) that produces representations useful for this task.
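The idea is a two-level loop: an inner loop applies a learned, label-free update rule to a network's weights on unlabeled data, and an outer loop adjusts the rule's parameters by how well the resulting features support classification from a few labels. The sketch below uses a deliberately toy parameterization (per-unit activation statistics driving a weight rescaling, scored by a prototype classifier); none of these specific choices are the paper's.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D_IN, D_HID, N_CLS = 8, 16, 3

# Learned update rule: a tiny network mapping per-unit activation
# statistics to a weight rescaling. A toy assumption, far simpler
# than the paper's actual update rule.
update_net = torch.nn.Sequential(
    torch.nn.Linear(2, 8), torch.nn.Tanh(), torch.nn.Linear(8, 1)
)
meta_opt = torch.optim.Adam(update_net.parameters(), lr=1e-3)

for meta_step in range(100):
    # Fresh inner weights each episode, kept as plain tensors so the
    # inner updates stay in the autograd graph for the outer gradient.
    w = torch.randn(D_IN, D_HID) * 0.1
    x_unlab = torch.randn(64, D_IN)          # unlabeled inner-loop data
    x_lab = torch.randn(24, D_IN)            # a few labeled examples
    y_lab = torch.arange(24) % N_CLS         # balanced toy labels

    # Inner loop: apply the learned, label-free update rule.
    for _ in range(3):
        h = torch.tanh(x_unlab @ w)                          # (64, D_HID)
        stats = torch.stack([h.mean(0), h.std(0)], dim=-1)   # (D_HID, 2)
        delta = update_net(stats).squeeze(-1)                # (D_HID,)
        w = w * (1.0 + 0.1 * delta)          # differentiable weight update

    # Outer objective: semi-supervised classification quality of the
    # resulting features, scored with a simple prototype classifier.
    feats = torch.tanh(x_lab @ w)
    protos = torch.stack([feats[y_lab == c].mean(0) for c in range(N_CLS)])
    meta_loss = F.cross_entropy(-torch.cdist(feats, protos), y_lab)

    meta_opt.zero_grad()
    meta_loss.backward()   # gradient reaches update_net through the unroll
    meta_opt.step()
```

The key mechanics are that the inner updates never see labels, and that the outer gradient flows through the unrolled inner loop into the update rule's parameters.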
In recent years, supervised learning with convolutional neural networks (CNNs) has seen huge adoption in computer vision applications.
Ranked #8 on Image Clustering on Tiny-ImageNet
We fine-tune CuBERT on our benchmark tasks and compare the resulting models against several variants of Word2Vec token embeddings, BiLSTM and Transformer models, and published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training and fewer labeled examples.
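Fine-tuning here follows the usual BERT sequence-classification recipe. The sketch below uses the Hugging Face transformers API with bert-base-uncased as a stand-in checkpoint and a two-example toy task, since neither a public CuBERT checkpoint nor its benchmark data is assumed available here:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stand-in checkpoint: the actual work fine-tunes CuBERT (BERT pretrained
# on source code); "bert-base-uncased" is used only so the sketch runs.
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# Toy binary task in the spirit of the paper's benchmarks
# (e.g. classifying a code fragment as correct vs. buggy).
snippets = ["def add(a, b): return a + b", "def add(a, b): return a - b"]
labels = torch.tensor([0, 1])

batch = tokenizer(snippets, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):   # a few fine-tuning steps on the toy batch
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    print(float(out.loss))
```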
Continual learning aims to improve the ability of modern learning systems to deal with non-stationary distributions, typically by attempting to learn a series of tasks sequentially.
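To make the sequential setting concrete, the sketch below trains a single model on a stream of toy tasks and re-scores the first task after each new one; the accuracy drop it prints is the catastrophic forgetting that continual-learning methods aim to mitigate. The task construction is entirely illustrative.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def make_task(seed):
    """Toy task: binary classification against a task-specific random plane."""
    g = torch.Generator().manual_seed(seed)
    plane = torch.randn(10, generator=g)
    x = torch.randn(256, 10, generator=g)
    return x, (x @ plane > 0).long()

tasks = [make_task(s) for s in range(3)]
model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(),
                            torch.nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def accuracy(x, y):
    with torch.no_grad():
        return (model(x).argmax(-1) == y).float().mean().item()

# Learn the tasks strictly in sequence: a non-stationary data stream.
for t, (x, y) in enumerate(tasks):
    for _ in range(200):
        loss = F.cross_entropy(model(x), y)
        opt.zero_grad(); loss.backward(); opt.step()
    # Re-check task 0 after each new task: the drop is forgetting.
    print(f"after task {t}: task-0 accuracy = {accuracy(*tasks[0]):.2f}")
```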
However, in order to successfully learn those features, they usually require massive amounts of manually labeled data, which is both expensive and impractical to scale.
Ranked #79 on Self-Supervised Image Classification on ImageNet
In this way, the labels and the network evolve shoulder to shoulder rather than in alternation.
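This contrasts with the alternating cluster-then-train scheme, where labels are frozen while the network fits them and only re-assigned afterwards. Below is a minimal sketch of the joint style, not necessarily this paper's exact mechanism, in which soft pseudo-labels drift toward the model's own sharpened predictions in the same step that updates the weights; the EMA rate and temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(512, 20)             # unlabeled data
model = torch.nn.Linear(20, 5)       # 5 pseudo-classes
opt = torch.optim.SGD(model.parameters(), lr=0.05)

# Soft pseudo-labels, initialized uniformly.
pseudo = torch.full((512, 5), 1.0 / 5)

for step in range(100):
    # Network update: fit the current pseudo-labels...
    logits = model(x)
    loss = F.kl_div(F.log_softmax(logits, -1), pseudo, reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
    # ...and, in the same step, let the labels drift toward the network's
    # sharpened predictions: labels and network evolve together rather
    # than in alternating phases.
    with torch.no_grad():
        pred = F.softmax(model(x) / 0.5, dim=-1)   # temperature-sharpened
        pseudo = 0.9 * pseudo + 0.1 * pred         # EMA label refresh
```

In practice such schemes need an extra constraint (for example an entropy or equipartition term) to rule out the trivial solution in which every point receives the same label.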
Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images.
Ranked #11 on Image Classification on STL-10 (using extra training data)
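The transfer is from next-token to next-pixel prediction: quantize pixels into a small vocabulary, flatten each image into a sequence, and train a causal Transformer autoregressively; the hidden activations then serve as the image representation. A toy sketch with assumed sizes (16 pixel bins, 8x8 images), much smaller than the models in the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, SEQ, DIM = 16, 8 * 8, 64   # 16 pixel bins, 8x8 images flattened

class PixelGPT(nn.Module):
    """Causal Transformer over pixel sequences (toy iGPT-style sketch)."""
    def __init__(self):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, DIM)
        self.pos = nn.Embedding(SEQ, DIM)
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, seq):                    # seq: (B, T) int pixel bins
        T = seq.size(1)
        h = self.tok(seq) + self.pos(torch.arange(T, device=seq.device))
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        h = self.blocks(h, mask=causal)        # each pixel sees only its past
        return self.head(h), h                 # next-pixel logits + features

model = PixelGPT()
opt = torch.optim.Adam(model.parameters(), lr=3e-4)

imgs = torch.randint(0, VOCAB, (32, SEQ))      # stand-in quantized images
inp = F.pad(imgs[:, :-1], (1, 0), value=0)     # shift right; bin 0 as start
logits, feats = model(inp)
loss = F.cross_entropy(logits.reshape(-1, VOCAB), imgs.reshape(-1))
loss.backward(); opt.step()
# feats (B, SEQ, DIM) are the representations a linear probe would evaluate.
```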