336 papers with code • 0 benchmarks • 5 datasets
Recent work by Bello et al. shows that training and scaling strategies may matter more than model architecture for visual recognition.
Ranked #12 on Action Classification on Kinetics-600
Our representations are learned using a contrastive loss, where two augmented clips from the same short video are pulled together in the embedding space, while clips from different videos are pushed away.
Ranked #1 on Self-Supervised Action Recognition on Kinetics-600
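A minimal PyTorch sketch of the kind of clip-level contrastive (InfoNCE) loss described above; the function name, batch layout, and temperature are illustrative assumptions, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def clip_infonce_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two augmented clips per video.
    Row i of z1 and row i of z2 come from the same video (positive pair);
    every other row in the batch acts as a negative."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature            # (N, N) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)        # diagonal entries are the positives
```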
This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
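SimCLR's objective is commonly implemented as the NT-Xent loss: each image yields two augmented views, and the other 2N-2 samples in the batch serve as negatives. A compact sketch under those assumptions (the temperature value is illustrative):

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) projections of two augmented views of the same batch."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # (2N, 2N) logits
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))                # exclude self-similarity
    # the positive for row i is its other augmented view at i+N (mod 2N)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```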
To evaluate the power of the learned representations, in addition to the conventional fully-supervised action recognition settings, we introduce a novel task called single-shot cross-view action recognition.
We first learn self-supervised representations from one-class data, and then build one-class classifiers on learned representations.
Ranked #4 on Anomaly Detection on One-class CIFAR-100
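A hedged sketch of this two-stage recipe: freeze the self-supervised encoder, embed the one-class training data, and fit a one-class classifier on the embeddings. The stub encoder, random tensors, and the choice of OneClassSVM here are illustrative assumptions, not the paper's setup:

```python
import torch
import torch.nn as nn
from sklearn.svm import OneClassSVM

# Stage 1 (assumed already done): a self-supervised encoder trained on
# one-class data. A random linear projection stands in for it here.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 128)).eval()

@torch.no_grad()
def embed(x):
    return encoder(x).numpy()

train_x = torch.randn(256, 3, 32, 32)   # stand-in for normal-class images
test_x = torch.randn(64, 3, 32, 32)

# Stage 2: build a one-class classifier on the frozen representations.
clf = OneClassSVM(nu=0.1, kernel="rbf").fit(embed(train_x))
scores = clf.decision_function(embed(test_x))  # higher = more "normal"
```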
We introduce COLA, a self-supervised pre-training approach for learning a general-purpose representation of audio.
Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models.
Ranked #246 on Image Classification on ImageNet
This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning.
Ranked #55 on Self-Supervised Image Classification on ImageNet
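MoCo's on-the-fly dictionary can be sketched as a FIFO queue of encoded keys paired with a momentum-updated key encoder; the sizes, momentum value, and stand-in linear encoders below are illustrative assumptions, not the reference implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

dim, K, m = 128, 4096, 0.999
encoder_q = nn.Linear(512, dim)                   # stand-in query encoder
encoder_k = nn.Linear(512, dim)                   # stand-in key encoder
encoder_k.load_state_dict(encoder_q.state_dict())
queue = F.normalize(torch.randn(K, dim), dim=1)   # the dictionary of keys

@torch.no_grad()
def momentum_update():
    for pq, pk in zip(encoder_q.parameters(), encoder_k.parameters()):
        pk.data.mul_(m).add_(pq.data, alpha=1 - m)

def moco_step(x_q, x_k, temperature=0.07):
    global queue
    q = F.normalize(encoder_q(x_q), dim=1)        # queries
    with torch.no_grad():
        momentum_update()
        k = F.normalize(encoder_k(x_k), dim=1)    # keys, no gradient
    l_pos = (q * k).sum(dim=1, keepdim=True)      # positive logits (N, 1)
    l_neg = q @ queue.t()                         # negative logits (N, K)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)   # positives at index 0
    loss = F.cross_entropy(logits, labels)
    queue = torch.cat([k, queue])[:K]             # enqueue new keys, drop oldest
    return loss
```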
In this work, we present an information-theoretic framework that formulates cross-lingual language model pre-training as maximizing mutual information between multilingual multi-granularity texts.
Ranked #6 on Zero-Shot Cross-Lingual Transfer on XTREME
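For context, the InfoNCE objective underlying much of this line of work is a lower bound on mutual information (van den Oord et al., 2018); with one positive and N-1 negatives per sample:

```latex
% Minimizing the InfoNCE loss over N samples maximizes a lower bound
% on the mutual information between the paired views x and y:
I(x; y) \ge \log N - \mathcal{L}_{\mathrm{InfoNCE}}
```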
Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.
Ranked #7 on Person Re-Identification on SYSU-30k (using extra training data)