Contrastive Learning

1119 papers with code • 1 benchmark • 9 datasets




Most implemented papers

A Simple Framework for Contrastive Learning of Visual Representations

google-research/simclr ICML 2020

This paper presents SimCLR: a simple framework for contrastive learning of visual representations.
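The core of the SimCLR framework is the NT-Xent (normalized temperature-scaled cross-entropy) loss computed over two augmented views of each image in a batch. A minimal NumPy sketch of that loss, assuming two `(N, d)` batches of embeddings (the function name and temperature default are illustrative, not taken from the repository):

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two batches of embeddings for two augmented views.

    z1, z2: (N, d) arrays; row i of z1 and row i of z2 are a positive pair.
    """
    N = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                  # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # L2-normalize
    sim = z @ z.T / temperature                           # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                        # exclude self-pairs
    # the positive for row i is row i+N (and vice versa)
    pos = np.concatenate([np.arange(N, 2 * N), np.arange(0, N)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return -(sim[np.arange(2 * N), pos] - logsumexp).mean()
```

Each embedding is attracted to its counterpart from the other view and repelled from the remaining 2N − 2 embeddings in the batch.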

Momentum Contrast for Unsupervised Visual Representation Learning

facebookresearch/moco CVPR 2020

This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning.
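The two mechanisms behind that dictionary are a slowly updated momentum ("key") encoder and a FIFO queue of encoded keys. A toy sketch of just those mechanics, assuming a single weight matrix stands in for the encoder (class and attribute names are illustrative; real MoCo applies the momentum update to all network parameters):

```python
import numpy as np
from collections import deque

class MoCoState:
    """Toy sketch of MoCo's momentum encoder and key queue."""

    def __init__(self, dim=8, queue_size=16, momentum=0.999):
        rng = np.random.default_rng(0)
        self.q_weights = rng.normal(size=(dim, dim))  # query encoder params
        self.k_weights = self.q_weights.copy()        # key encoder starts as a copy
        self.momentum = momentum
        self.queue = deque(maxlen=queue_size)         # FIFO dictionary of keys

    def momentum_update(self):
        # k <- m*k + (1-m)*q : the key encoder slowly tracks the query encoder
        m = self.momentum
        self.k_weights = m * self.k_weights + (1 - m) * self.q_weights

    def enqueue(self, keys):
        # newest keys push out the oldest ones once the queue is full
        for k in keys:
            self.queue.append(k)
```

Because the queue's size is decoupled from the batch size, the dictionary of negatives can be much larger than a single batch while staying consistent.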

Improved Baselines with Momentum Contrastive Learning

facebookresearch/moco 9 Mar 2020

Contrastive unsupervised learning has recently shown encouraging progress, e.g., in Momentum Contrast (MoCo) and SimCLR.

Supervised Contrastive Learning

google-research/google-research NeurIPS 2020

Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models.

SimCSE: Simple Contrastive Learning of Sentence Embeddings

princeton-nlp/SimCSE EMNLP 2021

This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings.
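In SimCSE's unsupervised variant, a positive pair is produced by encoding the same sentence twice with independent dropout masks, so dropout itself acts as the data augmentation. A toy NumPy sketch of that idea, assuming a linear map stands in for the sentence encoder (all names and the dropout rate here are illustrative):

```python
import numpy as np

def encode_with_dropout(x, W, p=0.1, rng=None):
    """Toy 'encoder': a linear map followed by inverted dropout.
    Two forward passes of the same input with independent masks
    yield the two views of a SimCSE-style positive pair."""
    rng = rng or np.random.default_rng()
    h = x @ W
    mask = rng.random(h.shape) >= p    # keep each unit with prob 1-p
    return h * mask / (1 - p)          # rescale so the expectation matches h

# a positive pair: one input x, two independent dropout masks
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 16))
W = rng.normal(size=(16, 16))
view1 = encode_with_dropout(x, W, p=0.5, rng=np.random.default_rng(1))
view2 = encode_with_dropout(x, W, p=0.5, rng=np.random.default_rng(2))
```

The two views differ only in which units were dropped, giving a "minimal" augmentation that still provides a useful contrastive signal.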

Unsupervised Learning of Visual Features by Contrasting Cluster Assignments

facebookresearch/swav NeurIPS 2020

We also propose a new data augmentation strategy, multi-crop, that uses a mix of views with different resolutions in place of two full-resolution views, without substantially increasing memory or compute requirements.
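The multi-crop sampling itself is simple: a few large ("global") crops plus many small ("local") crops per image. A minimal sketch on a raw 2D array (crop counts and sizes here are illustrative defaults, not the paper's exact settings):

```python
import numpy as np

def multi_crop(image, n_global=2, n_local=6, global_size=16, local_size=8,
               rng=None):
    """Sample n_global large crops and n_local small crops from an (H, W) array."""
    rng = rng or np.random.default_rng(0)
    H, W = image.shape[:2]
    crops = []
    for size, n in [(global_size, n_global), (local_size, n_local)]:
        for _ in range(n):
            y = rng.integers(0, H - size + 1)   # random top-left corner
            x = rng.integers(0, W - size + 1)
            crops.append(image[y:y + size, x:x + size])
    return crops
```

Because the local crops cover only a small fraction of the image, many more views can be processed for roughly the cost of two full-resolution ones.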

Unsupervised Feature Learning via Non-Parametric Instance-level Discrimination

zhirongw/lemniscate.pytorch 5 May 2018

Neural net classifiers trained on data with annotated class labels can also capture apparent visual similarity among categories without being directed to do so.

Self-Supervised Learning of Pretext-Invariant Representations

facebookresearch/vissl CVPR 2020

The goal of self-supervised learning from images is to construct image representations that are semantically meaningful via pretext tasks that do not require semantic annotations for a large training set of images.

Contrastive Learning for Unpaired Image-to-Image Translation

taesungp/contrastive-unpaired-translation 30 Jul 2020

Furthermore, we draw negatives from within the input image itself, rather than from the rest of the dataset.
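Concretely, the positive for a translated-image patch is the corresponding input-image patch, and the negatives are the other patches of that same input image. A NumPy sketch of such a patch-level InfoNCE term (the function name and temperature value are illustrative, not from the repository):

```python
import numpy as np

def internal_negatives_loss(feat_in, feat_out, pos_idx, temperature=0.07):
    """InfoNCE over patches of a single image.

    feat_in:  (P, d) patch features from the input image
    feat_out: (d,)   one patch feature from the translated image
    pos_idx:  index of the spatially corresponding input patch
    Negatives are the other P-1 patches of the *same* input image.
    """
    feat_in = feat_in / np.linalg.norm(feat_in, axis=1, keepdims=True)
    feat_out = feat_out / np.linalg.norm(feat_out)
    logits = feat_in @ feat_out / temperature          # (P,) similarities
    return -(logits[pos_idx] - np.log(np.exp(logits).sum()))
```

Drawing negatives from within the image encourages the output patch to resemble its own input patch more than any other location, without needing the rest of the dataset at loss time.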

Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning

zdaxie/PixPro CVPR 2021

We argue that the power of contrastive learning has yet to be fully unleashed, as current methods are trained only on instance-level pretext tasks, leading to representations that may be sub-optimal for downstream tasks requiring dense pixel predictions.