Unsupervised Pre-training

122 papers with code • 2 benchmarks • 7 datasets

Pre-training a neural network using unsupervised (self-supervised) auxiliary tasks on unlabeled data.
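For illustration, a minimal sketch of the idea in PyTorch: an encoder is first trained on unlabeled data with a self-supervised reconstruction objective, then reused for a downstream supervised task. The model names, sizes, and masking rate below are assumptions chosen for brevity, not a reference implementation.

```python
import torch
import torch.nn as nn

# Hypothetical encoder; any backbone works.
encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))

# --- Pre-training: self-supervised denoising objective on unlabeled data ---
decoder = nn.Linear(64, 128)                       # auxiliary head, discarded later
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

unlabeled = torch.randn(1024, 128)                 # stand-in for an unlabeled dataset
for x in unlabeled.split(64):
    corrupted = x * (torch.rand_like(x) > 0.15)    # randomly zero out ~15% of inputs
    recon = decoder(encoder(corrupted))
    loss = nn.functional.mse_loss(recon, x)        # reconstruct the clean input
    opt.zero_grad(); loss.backward(); opt.step()

# --- Fine-tuning: reuse the pre-trained encoder on a small labeled set ---
classifier = nn.Linear(64, 10)
# ... train encoder + classifier with a supervised loss on labeled data ...
```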

Libraries

Use these libraries to find Unsupervised Pre-training models and implementations

Most implemented papers

TabTransformer: Tabular Data Modeling Using Contextual Embeddings

lucidrains/tab-transformer-pytorch 11 Dec 2020

We propose TabTransformer, a novel deep tabular data modeling architecture for supervised and semi-supervised learning.
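A rough sketch of the core idea, categorical columns embedded and contextualized by a Transformer encoder, then concatenated with continuous features and fed to an MLP head, written in plain PyTorch. This is not the API of lucidrains/tab-transformer-pytorch; all dimensions and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TabTransformerSketch(nn.Module):
    """Illustrative sketch of the TabTransformer idea (not the paper's exact model)."""

    def __init__(self, cardinalities, num_continuous, dim=32, depth=6, heads=8, num_classes=2):
        super().__init__()
        self.embeds = nn.ModuleList([nn.Embedding(c, dim) for c in cardinalities])
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.norm = nn.LayerNorm(num_continuous)
        self.mlp = nn.Sequential(
            nn.Linear(dim * len(cardinalities) + num_continuous, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x_categ, x_cont):
        # One embedding per categorical column, contextualized by self-attention.
        tokens = torch.stack([emb(x_categ[:, i]) for i, emb in enumerate(self.embeds)], dim=1)
        ctx = self.transformer(tokens)                        # contextual embeddings
        flat = torch.cat([ctx.flatten(1), self.norm(x_cont)], dim=-1)
        return self.mlp(flat)

model = TabTransformerSketch(cardinalities=(10, 5, 6), num_continuous=4)
logits = model(torch.randint(0, 5, (8, 3)), torch.randn(8, 4))
```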

wav2vec: Unsupervised Pre-training for Speech Recognition

pytorch/fairseq 11 Apr 2019

Our experiments on WSJ reduce WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data is available.

Leveraging Pre-trained Checkpoints for Sequence Generation Tasks

huggingface/transformers TACL 2020

Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing.

A Transformer-based Framework for Multivariate Time Series Representation Learning

gzerveas/mvts_transformer 6 Oct 2020

In this work we propose for the first time a transformer-based framework for unsupervised representation learning of multivariate time series.
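One common unsupervised objective for such a framework is masked-value reconstruction. The sketch below illustrates it in plain PyTorch under assumed sizes and a simple random mask; it does not reproduce the exact setup of gzerveas/mvts_transformer.

```python
import torch
import torch.nn as nn

# Masked-value reconstruction for multivariate time series (illustrative sizes).
seq_len, n_vars, d_model = 100, 8, 64

proj_in  = nn.Linear(n_vars, d_model)
encoder  = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True), num_layers=3)
proj_out = nn.Linear(d_model, n_vars)
params = [*proj_in.parameters(), *encoder.parameters(), *proj_out.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

x = torch.randn(32, seq_len, n_vars)               # a batch of unlabeled series
mask = torch.rand(32, seq_len, n_vars) < 0.15      # hide ~15% of the values
x_masked = x.masked_fill(mask, 0.0)

recon = proj_out(encoder(proj_in(x_masked)))
loss = nn.functional.mse_loss(recon[mask], x[mask])   # reconstruct only masked values
opt.zero_grad(); loss.backward(); opt.step()
```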

How far can we go without convolution: Improving fully-connected networks

hantek/zlinnet 9 Nov 2015

We propose ways to improve the performance of fully connected networks.

Seasonal Contrast: Unsupervised Pre-Training from Uncurated Remote Sensing Data

ElementAI/seasonal-contrast ICCV 2021

Transfer learning approaches can reduce the data requirements of deep learning algorithms.

Multilingual Constituency Parsing with Self-Attention and Pre-Training

nikitakit/self-attentive-parser ACL 2019

We show that constituency parsing benefits from unsupervised pre-training across a variety of languages and a range of pre-training conditions.

Spatiotemporal Contrastive Video Representation Learning

tensorflow/models CVPR 2021

Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away.
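The pull-together/push-apart objective described above is an InfoNCE-style contrastive loss. A generic sketch, not the paper's exact implementation, assuming z1[i] and z2[i] are embeddings of two augmented clips from the same video:

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss: z1[i] and z2[i] are two augmented clips of the same
    video (positive pair); clips from other videos in the batch are negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature             # pairwise cosine similarities
    targets = torch.arange(z1.size(0))             # the matching index is the positive
    return F.cross_entropy(logits, targets)

# Example: a batch of 16 clip pairs embedded into 128-d space.
loss = clip_contrastive_loss(torch.randn(16, 128), torch.randn(16, 128))
```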

SeCo: Exploring Sequence Supervision for Unsupervised Representation Learning

YihengZhang-CV/SeCo-Sequence-Contrastive-Learning 3 Aug 2020

In this paper, we explore basic and generic forms of supervision within a sequence from three perspectives: spatial, spatiotemporal, and sequential.

Self-training and Pre-training are Complementary for Speech Recognition

pytorch/fairseq 22 Oct 2020

Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data.