446 papers with code • 0 benchmarks • 5 datasets
Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed apart.
Ranked #1 on Self-Supervised Action Recognition on Kinetics-400 (using extra training data)
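The pull-together/push-apart objective described above is typically an InfoNCE-style (NT-Xent) loss over paired embeddings. Below is a minimal NumPy sketch under common assumptions: `z1[i]` and `z2[i]` are embeddings of two augmented views of the same clip (the positive pair), and all other rows in the batch serve as negatives. The function name and temperature default are illustrative, not from any specific paper.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """InfoNCE / NT-Xent contrastive loss over two sets of embeddings.

    z1[i] and z2[i] are two augmented views of sample i (the positive
    pair); every other embedding in the batch acts as a negative.
    """
    # L2-normalise so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)            # (2N, d)
    sim = z @ z.T / temperature                     # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                  # exclude self-similarity
    n = len(z1)
    # each row's positive partner: row i pairs with row i + n, and vice versa
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    # numerically stable log-softmax over each row
    m = sim.max(axis=1, keepdims=True)
    lse = m[:, 0] + np.log(np.exp(sim - m).sum(axis=1))
    log_prob = sim[np.arange(2 * n), pos] - lse
    return -log_prob.mean()
```

When the two views of each sample agree (positives aligned) the loss is lower than when views are paired at random, which is exactly the behaviour the training objective exploits.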
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
Ranked #4 on Self-Supervised Person Re-Identification on SYSU-30k
Recent work by Bello et al. shows that training and scaling strategies may matter more than model architecture for visual recognition.
Ranked #14 on Action Classification on Kinetics-600
We introduce COLA, a self-supervised pre-training approach for learning a general-purpose representation of audio.
Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models.
Ranked #258 on Image Classification on ImageNet
In imitation learning, it is common to learn a behavior policy to match an unknown target policy via max-likelihood training on a collected set of target demonstrations.
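The max-likelihood training mentioned above reduces, for discrete actions, to minimising the negative log-likelihood of the expert's actions under the learned policy. A minimal NumPy sketch (the function name and shapes are assumptions for illustration, not the paper's API):

```python
import numpy as np

def behavior_cloning_nll(logits, actions):
    """Negative log-likelihood of demonstrated actions under the policy.

    logits:  (N, A) unnormalised action scores at N demonstration states.
    actions: (N,)   index of the expert's action in each state.
    """
    # numerically stable log-softmax over the action dimension
    m = logits.max(axis=1, keepdims=True)
    log_probs = logits - m - np.log(np.exp(logits - m).sum(axis=1, keepdims=True))
    # average NLL of the expert's chosen actions
    return -log_probs[np.arange(len(actions)), actions].mean()
```

Gradient descent on this quantity drives the policy's action distribution toward the empirical distribution of the demonstrations; a policy that concentrates mass on the expert's actions scores a lower NLL than a uniform one.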
Learning View-Disentangled Human Pose Representation by Contrastive Cross-View Mutual Information Maximization
To evaluate the power of the learned representations, in addition to the conventional fully-supervised action recognition settings, we introduce a novel task called single-shot cross-view action recognition.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
Ranked #4 on Anomaly Detection on One-class CIFAR-100
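The second stage described above (a one-class classifier built on frozen learned representations) can be as simple as a distance-to-centroid scorer fitted on the one-class features. A minimal sketch, assuming frozen feature vectors as input; `OneClassHead` is an illustrative name, not the paper's method:

```python
import numpy as np

class OneClassHead:
    """Distance-to-centroid anomaly scorer on top of frozen features."""

    def fit(self, feats):
        # centroid of the (normal) one-class training features
        self.mu = feats.mean(axis=0)
        # mean training radius, used to normalise scores
        self.scale = np.linalg.norm(feats - self.mu, axis=1).mean() + 1e-8
        return self

    def score(self, feats):
        # larger score = farther from the normal class = more anomalous
        return np.linalg.norm(feats - self.mu, axis=1) / self.scale
```

In practice heavier-weight scorers (kernel density, Mahalanobis distance, one-class SVM) fit on the same frozen features are common drop-in replacements.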
In this work, we present an information-theoretic framework that formulates cross-lingual language model pre-training as maximizing mutual information between multilingual multi-granularity texts.
Ranked #10 on Zero-Shot Cross-Lingual Transfer on XTREME
This enables building a large and consistent dictionary on the fly that facilitates contrastive unsupervised learning.
Ranked #11 on Self-Supervised Image Classification on ImageNet (finetuned) (using extra training data)
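The on-the-fly dictionary described above can be sketched as a fixed-size FIFO queue of encoded keys, kept consistent by a slowly moving momentum update of the key encoder's parameters. A minimal NumPy sketch under those assumptions (`MoCoQueue` and `momentum_update` are illustrative names, not the paper's API):

```python
import numpy as np
from collections import deque

class MoCoQueue:
    """Fixed-size FIFO dictionary of encoded keys: new batches are
    enqueued and the oldest keys are evicted automatically."""

    def __init__(self, max_size):
        self.keys = deque(maxlen=max_size)  # deque drops oldest on overflow

    def enqueue(self, batch_keys):
        for k in batch_keys:
            self.keys.append(k)

    def as_array(self):
        # negatives for the contrastive loss: all keys currently queued
        return np.stack(self.keys)

def momentum_update(key_params, query_params, m=0.999):
    """Slow exponential-moving-average update of the key encoder,
    which keeps the queued keys consistent with each other."""
    return [m * k + (1 - m) * q for k, q in zip(key_params, query_params)]
```

Because the queue decouples dictionary size from batch size, the set of negatives can be much larger than any single batch, which is the point of building the dictionary on the fly.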