Contrastive Learning

336 papers with code • 0 benchmarks • 5 datasets


Greatest papers with code

Revisiting 3D ResNets for Video Recognition

tensorflow/models 3 Sep 2021

A recent work by Bello et al. shows that training and scaling strategies may be more significant than model architectures for visual recognition.

Action Classification Contrastive Learning +1

Spatiotemporal Contrastive Video Representation Learning

tensorflow/models CVPR 2021

Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away.

Contrastive Learning Data Augmentation +4
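
A minimal sketch of the kind of clip-level contrastive objective described above (not the official CVRL code): embeddings of two augmented clips from the same video form a positive pair, while clips from the other videos in the batch act as negatives.

```python
# Minimal InfoNCE-style sketch: positives are two augmented clips from the
# same source video; all other videos in the batch serve as negatives.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(z1, z2, temperature=0.1):
    """z1, z2: [N, D] embeddings of two augmented clips per video."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # [N, N] similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    # Diagonal entries are the positives (clips from the same video).
    return F.cross_entropy(logits, labels)

# Random embeddings stand in for encoder outputs here.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = clip_contrastive_loss(z1, z2)
```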

Learning View-Disentangled Human Pose Representation by Contrastive Cross-View Mutual Information Maximization

google-research/google-research CVPR 2021

To evaluate the power of the learned representations, in addition to the conventional fully-supervised action recognition settings, we introduce a novel task called single-shot cross-view action recognition.

Action Recognition Contrastive Learning +1

Learning and Evaluating Representations for Deep One-class Classification

google-research/google-research ICLR 2021

We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.

Anomaly Detection Classification +6
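
A minimal sketch of the two-stage recipe described above, with a random projection standing in for the frozen self-supervised encoder and scikit-learn's OneClassSVM as the shallow one-class classifier on top of the learned representations.

```python
# Stage 1: embed the one-class training data with a frozen encoder.
# Stage 2: fit a simple one-class classifier on those embeddings.
# A random projection is used as a stand-in for a real pretrained encoder.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
W = rng.normal(size=(3072, 128))                  # stand-in "frozen encoder"
encode = lambda x: x.reshape(len(x), -1) @ W      # images -> 128-d features

normal_images = rng.normal(size=(200, 32, 32, 3))  # one-class training data
test_images = rng.normal(size=(50, 32, 32, 3))

clf = OneClassSVM(kernel="rbf", gamma="scale", nu=0.1)
clf.fit(encode(normal_images))
anomaly_scores = -clf.score_samples(encode(test_images))  # higher = more anomalous
```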

Contrastive Learning of General-Purpose Audio Representations

google-research/google-research 21 Oct 2020

We introduce COLA, a self-supervised pre-training approach for learning a general-purpose representation of audio.

Contrastive Learning

Supervised Contrastive Learning

google-research/google-research NeurIPS 2020

Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models.

Contrastive Learning Data Augmentation +3
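
A minimal sketch of a SupCon-style loss, assuming label information is available: every sample sharing the anchor's label is treated as a positive, rather than only the anchor's own augmented view.

```python
# Supervised contrastive sketch: positives for each anchor are all other
# samples with the same label; the loss averages log-probabilities over them.
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.1):
    """features: [N, D] embeddings; labels: [N] integer class ids."""
    features = F.normalize(features, dim=1)
    sim = features @ features.t() / temperature                # [N, N]
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    sim = sim.masked_fill(self_mask, float('-inf'))            # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    log_prob = log_prob.masked_fill(self_mask, 0.0)            # avoid -inf * 0
    pos_per_anchor = pos_mask.sum(1).clamp(min=1)
    return -(log_prob * pos_mask).sum(1).div(pos_per_anchor).mean()

features, labels = torch.randn(16, 128), torch.randint(0, 4, (16,))
loss = supcon_loss(features, labels)
```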

InfoXLM: An Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training

microsoft/unilm NAACL 2021

In this work, we present an information-theoretic framework that formulates cross-lingual language model pre-training as maximizing mutual information between multilingual-multi-granularity texts.

Contrastive Learning Cross-Lingual Transfer +1
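
A rough sketch (not the InfoXLM implementation) of how such a mutual-information objective can be instantiated as an InfoNCE-style cross-lingual contrastive loss, assuming paired translations are available as positive views.

```python
# Cross-lingual contrastive sketch: a sentence and its translation are a
# positive pair; other sentences in the batch are negatives.
import torch
import torch.nn.functional as F

def cross_lingual_infonce(src_emb, tgt_emb, temperature=0.05):
    """src_emb[i] and tgt_emb[i] encode a sentence and its translation."""
    src = F.normalize(src_emb, dim=1)
    tgt = F.normalize(tgt_emb, dim=1)
    logits = src @ tgt.t() / temperature
    labels = torch.arange(len(src), device=src.device)
    # Symmetrize: retrieve the translation in both directions.
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

src, tgt = torch.randn(32, 768), torch.randn(32, 768)
loss = cross_lingual_infonce(src, tgt)
```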

Improved Baselines with Momentum Contrastive Learning

facebookresearch/moco 9 Mar 2020

Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.

Ranked #7 on Person Re-Identification on SYSU-30k (using extra training data)

Contrastive Learning Data Augmentation +3
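
A minimal sketch of the momentum-encoder update at the heart of MoCo: the key encoder is an exponential moving average of the query encoder rather than being updated by backpropagation, which keeps the dictionary of negatives consistent over time.

```python
# Momentum (EMA) update of the key encoder, as used in MoCo-style training.
import copy
import torch
import torch.nn as nn

query_encoder = nn.Sequential(nn.Linear(512, 128))   # toy encoder for the sketch
key_encoder = copy.deepcopy(query_encoder)
for p in key_encoder.parameters():
    p.requires_grad = False          # keys are produced without gradients

@torch.no_grad()
def momentum_update(q_enc, k_enc, m=0.999):
    """EMA update: key params <- m * key params + (1 - m) * query params."""
    for pq, pk in zip(q_enc.parameters(), k_enc.parameters()):
        pk.mul_(m).add_(pq, alpha=1 - m)

momentum_update(query_encoder, key_encoder)
```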