Unsupervised Pre-training
122 papers with code • 2 benchmarks • 7 datasets
Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
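As a rough illustration of this idea, the sketch below pre-trains an encoder with a masked-reconstruction pretext task on unlabeled data and then reuses it under a supervised head. This is a minimal sketch only: PyTorch is assumed as the framework, and the layer sizes, masking rate, and training loop are arbitrary choices, not taken from any paper listed on this page.

```python
# Minimal sketch of unsupervised pre-training via masked reconstruction (assumed
# PyTorch). All module names and hyperparameters here are illustrative.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Linear(64, 128)  # reconstruction head, used only during pre-training

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
unlabeled = torch.randn(1024, 128)  # stand-in for an unlabeled dataset

for step in range(100):
    x = unlabeled[torch.randint(0, 1024, (32,))]
    mask = (torch.rand_like(x) > 0.15).float()        # hide roughly 15% of inputs
    recon = decoder(encoder(x * mask))
    # reconstruct only the masked positions
    loss = ((recon - x) ** 2 * (1 - mask)).sum() / (1 - mask).sum().clamp(min=1)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Fine-tuning: keep the pre-trained encoder and attach a supervised task head.
classifier = nn.Sequential(encoder, nn.Linear(64, 10))
```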
Most implemented papers
TabTransformer: Tabular Data Modeling Using Contextual Embeddings
We propose TabTransformer, a novel deep tabular data modeling architecture for supervised and semi-supervised learning.
wav2vec: Unsupervised Pre-training for Speech Recognition
Our experiments on WSJ reduce the WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data are available.
Leveraging Pre-trained Checkpoints for Sequence Generation Tasks
Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing.
A Transformer-based Framework for Multivariate Time Series Representation Learning
In this work, we propose for the first time a transformer-based framework for unsupervised representation learning of multivariate time series.
How far can we go without convolution: Improving fully-connected networks
We propose ways to improve the performance of fully connected networks.
Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data
Transfer learning approaches can reduce the data requirements of deep learning algorithms.
Multilingual Constituency Parsing with Self-Attention and Pre-Training
We show that constituency parsing benefits from unsupervised pre-training across a variety of languages and a range of pre-training conditions.
Spatiotemporal Contrastive Video Representation Learning
Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away.
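The loss described above can be written as an InfoNCE-style contrastive objective. The sketch below is an assumed illustration in PyTorch, not the paper's exact implementation: `emb_a` and `emb_b` stand for embeddings of two augmented clips from the same videos, and the temperature value is an arbitrary choice.

```python
# InfoNCE-style contrastive loss: matching rows of emb_a/emb_b are pulled
# together, non-matching rows are pushed apart (assumed PyTorch sketch).
import torch
import torch.nn.functional as F

def contrastive_loss(emb_a: torch.Tensor, emb_b: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    a = F.normalize(emb_a, dim=1)
    b = F.normalize(emb_b, dim=1)
    logits = a @ b.t() / temperature                      # pairwise cosine similarities
    targets = torch.arange(a.size(0), device=a.device)    # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Example: a batch of 8 clips with 128-dim embeddings from two augmentations each.
loss = contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
```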
SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning
In this paper, we explore basic and generic supervision in sequences from spatial, spatiotemporal, and sequential perspectives.
Self-training and Pre-training are Complementary for Speech Recognition
Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data.