Contrastive Learning

2,076 papers with code • 1 benchmark • 11 datasets

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
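
Most of the methods listed below optimize some variant of the same objective: score a positive pair higher than all negative pairs under a softmax. The snippet below is a minimal, generic sketch of such an InfoNCE-style loss in PyTorch; the tensor shapes, temperature, and function name are illustrative assumptions rather than any particular paper's recipe.

```python
# Minimal sketch of an InfoNCE-style contrastive loss (not tied to any specific
# paper below); shapes and the temperature value are illustrative assumptions.
import torch
import torch.nn.functional as F

def info_nce(queries, keys, temperature=0.1):
    """queries, keys: (N, D) where queries[i] and keys[i] form a positive pair;
    all other keys in the batch act as negatives for queries[i]."""
    q = F.normalize(queries, dim=1)
    k = F.normalize(keys, dim=1)
    logits = q @ k.t() / temperature                      # (N, N) similarity matrix
    targets = torch.arange(q.size(0), device=q.device)    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 embedding pairs of dimension 128.
loss = info_nce(torch.randn(8, 128), torch.randn(8, 128))
```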

(Image credit: Schroff et al. 2015)

Most implemented papers

A Simple Framework for Contrastive Learning of Visual Representations

google-research/simclr ICML 2020

This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
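
As a rough illustration of the two-view setup SimCLR popularized, the sketch below computes an NT-Xent-style loss over a batch in which each image contributes two augmented views. The encoder, projection head, and augmentation pipeline are omitted and the temperature is an assumed value, so treat this as a sketch rather than the reference implementation (see google-research/simclr for that).

```python
# Hedged sketch of an NT-Xent-style loss over two augmented views per image;
# temperature and batch/embedding sizes are illustrative assumptions.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) projections of two augmented views of the same N images."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                         # (2N, 2N) pairwise similarities
    sim.fill_diagonal_(float("-inf"))                     # a view is never its own negative
    n = z1.size(0)
    # For row i, the positive is the other view of the same image.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

loss = nt_xent(torch.randn(16, 128), torch.randn(16, 128))
```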

Momentum Contrast for Unsupervised Visual Representation Learning

facebookresearch/moco CVPR 2020

Maintaining the dictionary as a queue of encoded samples, with keys produced by a slowly progressing, momentum-updated encoder, enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning.
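
A rough sketch of the two ingredients mentioned above, assuming a PyTorch setup: a momentum update that keeps the key encoder a slow-moving average of the query encoder, and a loss that contrasts each query against its own key plus a queue of older keys. Queue size, momentum, and temperature values here are illustrative; see facebookresearch/moco for the actual implementation.

```python
# Sketch of MoCo-style momentum update and queue-based contrastive loss;
# all hyperparameter values below are illustrative assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    # Key-encoder parameters track the query encoder as a moving average.
    for q_param, k_param in zip(query_encoder.parameters(), key_encoder.parameters()):
        k_param.data = m * k_param.data + (1.0 - m) * q_param.data

def moco_loss(q, k, queue, temperature=0.07):
    """q, k: (N, D) query/key embeddings of the same images; queue: (K, D) older keys
    used as negatives."""
    q, k, queue = F.normalize(q, dim=1), F.normalize(k, dim=1), F.normalize(queue, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)   # (N, 1) positive logits
    l_neg = q @ queue.t()                      # (N, K) negative logits against the queue
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    targets = torch.zeros(q.size(0), dtype=torch.long, device=q.device)  # positive is index 0
    return F.cross_entropy(logits, targets)

loss = moco_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(4096, 128))
```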

Improved Baselines with Momentum Contrastive Learning

facebookresearch/moco 9 Mar 2020

Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.

Supervised Contrastive Learning

google-research/google-research NeurIPS 2020

Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models.

SimCSE: Simple Contrastive Learning of Sentence Embeddings

princeton-nlp/SimCSE EMNLP 2021

This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings.
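
The unsupervised variant of SimCSE builds positive pairs by encoding the same sentence twice and letting dropout noise produce two slightly different embeddings. The sketch below illustrates that idea with a toy encoder standing in for the BERT-style model used in the paper; the layer sizes and temperature are assumptions.

```python
# Sketch of dropout-based positive pairs with in-batch negatives; the toy
# encoder and all sizes below are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(300, 256), nn.ReLU(), nn.Dropout(p=0.1), nn.Linear(256, 128))
encoder.train()  # keep dropout active so the two passes differ

sentence_features = torch.randn(32, 300)                 # stand-in for pooled sentence inputs
z1 = F.normalize(encoder(sentence_features), dim=1)
z2 = F.normalize(encoder(sentence_features), dim=1)      # same inputs, different dropout mask

logits = z1 @ z2.t() / 0.05                               # small temperature, assumed value
targets = torch.arange(z1.size(0))                        # positives on the diagonal
loss = F.cross_entropy(logits, targets)
```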

Unsupervised Learning of Visual Features by Contrasting Cluster Assignments

facebookresearch/swav NeurIPS 2020

In addition, we propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without significantly increasing the memory or compute requirements.
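
A hedged sketch of what such a multi-crop transform can look like with torchvision: a few large crops plus several small, low-resolution crops of the same image. The crop sizes, scale ranges, and counts below are illustrative assumptions, not the paper's exact recipe (see facebookresearch/swav for that).

```python
# Illustrative multi-crop augmentation: crop sizes, scales, and counts are assumptions.
from torchvision import transforms

def multi_crop(image, n_global=2, n_local=4):
    """image: a PIL image; returns a list of tensor crops (a few large, several small)."""
    global_crop = transforms.RandomResizedCrop(224, scale=(0.4, 1.0))
    local_crop = transforms.RandomResizedCrop(96, scale=(0.05, 0.4))
    to_tensor = transforms.ToTensor()
    crops = [to_tensor(global_crop(image)) for _ in range(n_global)]
    crops += [to_tensor(local_crop(image)) for _ in range(n_local)]
    return crops  # the small crops add views at little extra memory or compute cost
```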

Unsupervised Feature Learning via Non-Parametric Instance-level Discrimination

zhirongw/lemniscate.pytorch 5 May 2018

Neural net classifiers trained on data with annotated class labels can also capture apparent visual similarity among categories without being directed to do so.

Contrastive Learning for Unpaired Image-to-Image Translation

taesungp/contrastive-unpaired-translation 30 Jul 2020

The method matches corresponding patches of the input and output images with a contrastive objective and, notably, draws negatives from within the input image itself rather than from the rest of the dataset.
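
The sketch below illustrates that idea at the feature level, assuming patch features have already been extracted: the output feature at each location is pulled toward the input feature at the same location and pushed away from input features at other locations in the same image. Shapes, names, and the temperature are illustrative assumptions.

```python
# Patch-level contrastive loss with negatives drawn from within the same image;
# all names and values here are illustrative, not the paper's implementation.
import torch
import torch.nn.functional as F

def patch_nce(input_feats, output_feats, temperature=0.07):
    """input_feats, output_feats: (P, D) features at P corresponding patch locations of
    the input and translated images; other locations in the same image serve as negatives."""
    f_in = F.normalize(input_feats, dim=1)
    f_out = F.normalize(output_feats, dim=1)
    logits = f_out @ f_in.t() / temperature                       # (P, P) within-image similarities
    targets = torch.arange(f_out.size(0), device=f_out.device)    # same-location positives
    return F.cross_entropy(logits, targets)

loss = patch_nce(torch.randn(64, 256), torch.randn(64, 256))
```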

Contrastive Multiview Coding

HobbitLong/CMC ECCV 2020

We analyze key properties of the approach that make it work, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views we learn from, the better the resulting representation captures underlying scene semantics.

Self-Supervised Learning of Pretext-Invariant Representations

facebookresearch/vissl CVPR 2020

The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images.