Self-Supervised Learning

1668 papers with code • 6 benchmarks • 40 datasets

Self-Supervised Learning was proposed as a way to exploit unlabeled data, building on the success of supervised learning. Producing a dataset with good labels is expensive, while unlabeled data is generated all the time. The motivation of Self-Supervised Learning is therefore to make use of this large amount of unlabeled data. Its main idea is to generate labels from the unlabeled data itself, according to the structure or characteristics of the data, and then train on the resulting pseudo-labeled data in a supervised manner. Self-Supervised Learning is widely used in representation learning, where a model learns the latent features of the data. The technique is often employed in computer vision, video processing and robot control.
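
To make the idea concrete, here is a minimal sketch of a classic pretext task, rotation prediction, assuming PyTorch: the labels (multiples of 90°) are generated from the images themselves, and the network is then trained on them in the usual supervised way. The backbone and helper names are illustrative, not from any particular paper.

```python
import torch
import torch.nn as nn

def make_rotation_batch(images):
    """Rotate each image by a random multiple of 90 degrees;
    the chosen multiple (0-3) becomes the free label."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels

encoder = nn.Sequential(                     # stand-in backbone; any CNN/ViT works
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
head = nn.Linear(16, 4)                      # 4-way rotation classifier
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-3)

unlabeled = torch.randn(32, 3, 32, 32)       # a batch of "unlabeled" images
inputs, targets = make_rotation_batch(unlabeled)
loss = nn.functional.cross_entropy(head(encoder(inputs)), targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()                             # features are learned with no human labels
```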

Source: Self-supervised Point Set Local Descriptors for Point Cloud Registration

Image source: LeCun


Most implemented papers

A Simple Framework for Contrastive Learning of Visual Representations

google-research/simclr ICML 2020

This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
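
A minimal sketch of the NT-Xent (normalized temperature-scaled cross-entropy) loss at the core of SimCLR, assuming PyTorch; batch size, embedding dimension and temperature below are illustrative rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss over a batch of paired augmented views z1, z2: (N, D)."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # cosine-similarity logits
    sim.fill_diagonal_(float('-inf'))                   # a sample is not its own pair
    # Row i's positive is the other augmented view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)       # projector outputs of two views
loss = nt_xent_loss(z1, z2)
```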

ALBERT: A Lite BERT for Self-supervised Learning of Language Representations

google-research/ALBERT ICLR 2020

Increasing model size when pretraining natural language representations often results in improved performance on downstream tasks.

Masked Autoencoders Are Scalable Vision Learners

facebookresearch/mae CVPR 2022

Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
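
A minimal sketch of MAE-style random patch masking in PyTorch, using a shuffle-and-keep trick; the 75% mask ratio matches the paper's default, while the shapes are illustrative. The encoder then runs only on the small visible subset, which is what makes the approach scalable.

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """Keep a random subset of patch tokens; patches: (N, L, D)."""
    n, l, d = patches.shape
    len_keep = int(l * (1 - mask_ratio))
    noise = torch.rand(n, l)                     # one random score per patch
    ids_keep = noise.argsort(dim=1)[:, :len_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(n, l)                      # 1 = masked, 0 = visible
    mask.scatter_(1, ids_keep, 0)                # the decoder later reconstructs the 1s
    return visible, mask

tokens = torch.randn(4, 196, 768)                # e.g. 14x14 ViT patch embeddings
visible, mask = random_masking(tokens)           # encoder sees only ~25% of patches
```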

Bootstrap your own latent: A new approach to self-supervised Learning

deepmind/deepmind-research 13 Jun 2020

From an augmented view of an image, we train the online network to predict the target network representation of the same image under a different augmented view.
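
A minimal sketch of this objective in PyTorch: the online network plus a predictor regresses the target network's (stop-gradient) output, and the target weights track the online weights by exponential moving average. Linear layers stand in for BYOL's actual backbone and MLP heads, and the momentum value is illustrative.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

online = nn.Linear(128, 64)                 # stand-in for backbone + projector
predictor = nn.Linear(64, 64)               # prediction head, online side only
target = copy.deepcopy(online)              # EMA copy; receives no gradients
for p in target.parameters():
    p.requires_grad_(False)

def byol_loss(p, z):
    """Negative cosine similarity between prediction p and target projection z."""
    return 2 - 2 * F.cosine_similarity(p, z.detach(), dim=-1).mean()

v1, v2 = torch.randn(16, 128), torch.randn(16, 128)     # two augmented views
loss = (byol_loss(predictor(online(v1)), target(v2))    # symmetrized over views
        + byol_loss(predictor(online(v2)), target(v1)))
loss.backward()

tau = 0.99                                  # EMA update of the target network
with torch.no_grad():
    for po, pt in zip(online.parameters(), target.parameters()):
        pt.mul_(tau).add_((1 - tau) * po)
```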

Emerging Properties in Self-Supervised Vision Transformers

facebookresearch/dino ICCV 2021

In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets).

Barlow Twins: Self-Supervised Learning via Redundancy Reduction

facebookresearch/barlowtwins 4 Mar 2021

This causes the embedding vectors of distorted versions of a sample to be similar, while minimizing the redundancy between the components of these vectors.
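
A minimal sketch of that objective in PyTorch: the cross-correlation matrix between the standardized embeddings of the two distorted views is pushed toward the identity, with the off-diagonal weight λ set to the 5e-3 reported in the paper; shapes are illustrative.

```python
import torch

def barlow_twins_loss(z1, z2, lam=5e-3):
    """Push the (D, D) cross-correlation of two views toward the identity:
    diagonal -> 1 (invariance), off-diagonal -> 0 (redundancy reduction)."""
    n = z1.size(0)
    z1 = (z1 - z1.mean(0)) / z1.std(0)       # standardize each embedding dimension
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = (z1.t() @ z2) / n                    # cross-correlation matrix
    diag = torch.diagonal(c)
    on_diag = (diag - 1).pow(2).sum()
    off_diag = (c - torch.diag_embed(diag)).pow(2).sum()
    return on_diag + lam * off_diag

z1, z2 = torch.randn(32, 64), torch.randn(32, 64)   # embeddings of two distortions
loss = barlow_twins_loss(z1, z2)
```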

Supervised Contrastive Learning

google-research/google-research NeurIPS 2020

Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state of the art performance in the unsupervised training of deep image models.
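
A minimal sketch of the supervised contrastive (SupCon) loss in PyTorch, where all samples sharing a label act as positives for one another; this follows the paper's "L_out" form of the loss, with illustrative shapes and temperature.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """features: (N, D) embeddings; labels: (N,) class ids."""
    features = F.normalize(features, dim=1)
    logits = features @ features.t() / temperature
    logits = logits - logits.max(dim=1, keepdim=True).values.detach()  # stability
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye   # same-label pairs
    denom = logits.exp().masked_fill(eye, 0).sum(dim=1, keepdim=True)
    log_prob = logits - denom.log()
    # Average log-probability over each anchor's positives (clamp avoids 0-division).
    mean_log_prob_pos = (pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)
    return -mean_log_prob_pos.mean()

feats, labels = torch.randn(16, 128), torch.randint(0, 4, (16,))
loss = supcon_loss(feats, labels)
```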

wav2vec 2.0: A Framework for Self-Supervised Learning of Speech Representations

pytorch/fairseq NeurIPS 2020

We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler.

TabNet: Attentive Interpretable Tabular Learning

google-research/google-research 20 Aug 2019

We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet.

COVID-CT-Dataset: A CT Scan Dataset about COVID-19

UCSD-AI4H/COVID-CT 30 Mar 2020

Using this dataset, we develop diagnosis methods based on multi-task learning and self-supervised learning that achieve an F1 of 0.90, an AUC of 0.98, and an accuracy of 0.89.