92 papers with code • 2 benchmarks • 5 datasets
Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
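As a minimal sketch of this recipe, the snippet below pre-trains a linear autoencoder (an unsupervised auxiliary task) on unlabeled data, then keeps the encoder weights as pre-trained representations for a downstream supervised model. The synthetic data, layer sizes, and learning rate are illustrative assumptions, not taken from any paper on this page.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                 # unlabeled data (synthetic stand-in)

W_enc = rng.normal(scale=0.1, size=(8, 3))    # encoder: kept after pre-training
W_dec = rng.normal(scale=0.1, size=(3, 8))    # decoder: discarded after pre-training

def recon_loss(X, W_enc, W_dec):
    """Mean squared reconstruction error of the autoencoder."""
    X_hat = X @ W_enc @ W_dec
    return float(np.mean((X_hat - X) ** 2))

loss_before = recon_loss(X, W_enc, W_dec)
lr = 0.05
for _ in range(500):
    H = X @ W_enc                             # encode
    err = H @ W_dec - X                       # reconstruction error
    # Gradient descent on the mean squared reconstruction loss.
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
loss_after = recon_loss(X, W_enc, W_dec)

# Pre-training reduced the auxiliary loss; W_enc now initializes the encoder
# of a supervised model trained on (typically scarce) labeled data.
assert loss_after < loss_before
features = X @ W_enc                          # pre-trained representations
```

In practice the auxiliary task is richer (masked prediction, contrastive learning, next-token prediction), but the pattern is the same: learn the encoder on unlabeled data, then fine-tune it with labels.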
These leaderboards are used to track progress in Unsupervised Pre-training.
Libraries: use these libraries to find Unsupervised Pre-training models and implementations.
Most implemented papers
TabTransformer: Tabular Data Modeling Using Contextual Embeddings
We propose TabTransformer, a novel deep tabular data modeling architecture for supervised and semi-supervised learning.
Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing.
A Transformer-based Framework for Multivariate Time Series Representation Learning
In this work, we propose the first transformer-based framework for unsupervised representation learning of multivariate time series.
How far can we go without convolution: Improving fully-connected networks
We propose ways to improve the performance of fully connected networks.
wav2vec: Unsupervised Pre-training for Speech Recognition
Our experiments on WSJ reduce the WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data are available.
Exact solutions to the nonlinear dynamics of learning in deep linear neural networks
We further exhibit a new class of random orthogonal initial conditions on weights that, like unsupervised pre-training, enjoys depth independent learning times.
Multilingual Constituency Parsing with Self-Attention and Pre-Training
We show that constituency parsing benefits from unsupervised pre-training across a variety of languages and a range of pre-training conditions.
SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning
In this paper, we explore basic and generic supervision within sequences from spatial, spatiotemporal, and sequential perspectives.
Spatiotemporal Contrastive Video Representation Learning
Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away.
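The objective described above can be sketched as an InfoNCE-style contrastive loss: embeddings of two augmented clips from the same video form a positive pair, while clips from other videos in the batch serve as negatives. The numpy implementation below is a hedged, simplified sketch (toy embeddings in place of a video encoder; the temperature value is an assumption), not the paper's exact loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(z):
    """Project embeddings onto the unit sphere (cosine-similarity space)."""
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def contrastive_loss(z_a, z_b, temperature=0.1):
    """InfoNCE over a batch: row i of z_a is positive with row i of z_b;
    all other rows of z_b act as negatives for it."""
    z_a, z_b = l2_normalize(z_a), l2_normalize(z_b)
    logits = z_a @ z_b.T / temperature        # scaled cosine similarities
    # Cross-entropy with the matching index as the target class:
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))

# Toy check: positives that are noisy copies of each other are pulled
# together (low loss), unrelated random embeddings are not.
z = rng.normal(size=(16, 32))
loss_pos = contrastive_loss(z, z + 0.05 * rng.normal(size=z.shape))
loss_rand = contrastive_loss(z, rng.normal(size=z.shape))
assert loss_pos < loss_rand
```

Minimizing this loss pulls the two augmented views of the same clip together in the embedding space while pushing clips from different videos apart, exactly the geometry the abstract describes.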
Self-training and Pre-training are Complementary for Speech Recognition
Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data.