Contrastive Learning

2111 papers with code • 1 benchmark • 11 datasets

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.

(Image credit: Schroff et al. 2015)
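
As a concrete illustration of the idea, below is a minimal sketch of an InfoNCE/NT-Xent-style objective (as popularized by SimCLR), in which two augmented views of the same sample form a positive pair and every other sample in the batch acts as a negative. Function names and hyperparameters here are illustrative, not taken from any particular paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.5):
    """NT-Xent / InfoNCE loss for a batch of positive pairs.

    z1, z2: (N, D) embeddings of two augmented views of the same N samples.
    Each view's positive is the corresponding row in the other view;
    every other embedding in the 2N-sized batch acts as a negative.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                  # (2N, D)
    sim = torch.mm(z, z.t()) / temperature          # (2N, 2N) cosine similarities
    n = z1.size(0)
    # Mask out self-similarity so a sample is never its own negative.
    sim.fill_diagonal_(float('-inf'))
    # The positive for row i is row i+N (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Example usage with random embeddings standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z1, z2)
```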

Most implemented papers

CURL: Contrastive Unsupervised Representations for Reinforcement Learning

MishaLaskin/curl 8 Apr 2020

On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features.

Contrastive Learning of Medical Visual Representations from Paired Images and Text

yuhaozhang/convirt 2 Oct 2020

Existing work commonly relies on fine-tuning weights transferred from ImageNet pretraining, which is suboptimal due to drastically different image characteristics, or on rule-based label extraction from the textual reports paired with medical images, which is inaccurate and hard to generalize.

Propagate Yourself: Exploring Pixel-Level Consistency for Unsupervised Visual Representation Learning

zdaxie/PixPro CVPR 2021

We argue that the power of contrastive learning has yet to be fully unleashed, as current methods are trained only on instance-level pretext tasks, leading to representations that may be sub-optimal for downstream tasks requiring dense pixel predictions.
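
For intuition, here is a heavily simplified sketch of a pixel-level contrastive loss in which features at the same spatial location across two views (assumed here to be spatially aligned) attract and all other locations repel; PixPro's actual pixel propagation module and spatial matching rule are more involved.

```python
import torch
import torch.nn.functional as F

def dense_contrastive_loss(f1, f2, temperature=0.3):
    """Toy pixel-level InfoNCE between two aligned feature maps.

    f1, f2: (C, H, W) feature maps of two views of the same image,
    assumed here to be spatially aligned. The feature at each location
    in f1 should match the same location in f2 and repel all others.
    """
    c, h, w = f1.shape
    p1 = F.normalize(f1.reshape(c, h * w).t(), dim=1)   # (HW, C)
    p2 = F.normalize(f2.reshape(c, h * w).t(), dim=1)   # (HW, C)
    logits = p1 @ p2.t() / temperature                  # (HW, HW) pixel-to-pixel similarities
    targets = torch.arange(h * w)                       # positive is the same location
    return F.cross_entropy(logits, targets)

loss = dense_contrastive_loss(torch.randn(64, 7, 7), torch.randn(64, 7, 7))
```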

Contrastive Learning for Compact Single Image Dehazing

GlassyWu/AECR-Net CVPR 2021

In this paper, we propose a novel contrastive regularization (CR) built upon contrastive learning, which exploits the information of both hazy and clear images as negative and positive samples, respectively.
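
A rough sketch of this idea, assuming a frozen feature extractor `feat` (the paper itself uses multi-layer VGG features with per-layer weights): the restored output (anchor) is pulled toward the clear image (positive) and pushed away from the hazy input (negative).

```python
import torch
import torch.nn.functional as F

def contrastive_regularization(feat, restored, clear, hazy, eps=1e-7):
    """Simplified contrastive regularization for dehazing.

    feat: a frozen feature extractor (e.g. a pretrained CNN backbone).
    restored: network output (anchor); clear: ground truth (positive);
    hazy: degraded input (negative). The ratio encourages the anchor to
    be close to the positive and far from the negative in feature space.
    """
    with torch.no_grad():
        f_pos, f_neg = feat(clear), feat(hazy)
    f_anchor = feat(restored)
    d_pos = F.l1_loss(f_anchor, f_pos)
    d_neg = F.l1_loss(f_anchor, f_neg)
    return d_pos / (d_neg + eps)
```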

Sigmoid Loss for Language Image Pre-Training

google-research/big_vision ICCV 2023

We propose a simple pairwise Sigmoid loss for Language-Image Pre-training (SigLIP).
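
A minimal sketch of the pairwise sigmoid objective, assuming L2-normalized embeddings and learnable temperature and bias scalars: matched image-text pairs are labeled +1, all other pairings in the batch -1, and each pair is scored independently, so no batch-wise softmax normalization is needed.

```python
import torch
import torch.nn.functional as F

def siglip_loss(img_emb, txt_emb, t, b):
    """Pairwise sigmoid loss for a batch of matched image-text pairs.

    img_emb, txt_emb: (N, D) L2-normalized embeddings where row i of each
    is a matched pair. t (temperature) and b (bias) are learnable scalars.
    Every image-text pairing is scored independently with a sigmoid.
    """
    logits = img_emb @ txt_emb.t() * t + b          # (N, N) pair scores
    n = img_emb.size(0)
    labels = 2 * torch.eye(n) - 1                   # +1 on the diagonal, -1 elsewhere
    return -F.logsigmoid(labels * logits).sum() / n

img = F.normalize(torch.randn(4, 256), dim=1)
txt = F.normalize(torch.randn(4, 256), dim=1)
loss = siglip_loss(img, txt, t=torch.tensor(10.0), b=torch.tensor(-10.0))
```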

Dense Contrastive Learning for Self-Supervised Visual Pre-Training

open-mmlab/mmselfsup CVPR 2021

Compared to the baseline method MoCo-v2, our method introduces negligible computation overhead (only <1% slower) but demonstrates consistently superior performance when transferring to downstream dense prediction tasks, including object detection, semantic segmentation, and instance segmentation, and it outperforms state-of-the-art methods by a large margin.

Pre-Trained Image Processing Transformer

huawei-noah/Pretrained-IPT CVPR 2021

To fully exploit the capability of the transformer, we utilize the well-known ImageNet benchmark to generate a large number of corrupted image pairs.
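
For illustration, here is a hypothetical sketch of how such corrupted/clean training pairs might be synthesized from clean images; the paper covers several degradation types (e.g. downsampling for super-resolution, additive noise for denoising), and the function below is only an assumed stand-in for that recipe.

```python
import torch
import torch.nn.functional as F

def make_corrupted_pairs(clean, task="denoise", noise_sigma=0.1, scale=2):
    """Generate (corrupted, clean) training pairs from clean images.

    clean: (N, C, H, W) tensor of clean images in [0, 1].
    task: "denoise" adds Gaussian noise; "sr" bicubically downsamples.
    """
    if task == "denoise":
        corrupted = (clean + noise_sigma * torch.randn_like(clean)).clamp(0, 1)
    elif task == "sr":
        corrupted = F.interpolate(clean, scale_factor=1 / scale,
                                  mode="bicubic", align_corners=False)
    else:
        raise ValueError(f"unknown task: {task}")
    return corrupted, clean
```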

Model-Contrastive Federated Learning

adap/flower CVPR 2021

A key challenge in federated learning is to handle the heterogeneity of local data distribution across parties.

Unsupervised Dense Information Retrieval with Contrastive Learning

facebookresearch/contriever 16 Dec 2021

In this work, we explore the limits of contrastive learning as a way to train unsupervised dense retrievers and show that it leads to strong performance in various retrieval settings.
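
As a hedged sketch of how such retrievers are typically trained, the loss below contrasts each query against its positive passage and treats the other passages in the batch as negatives (in-batch negatives); the pairing strategy and encoder details of the actual paper differ.

```python
import torch
import torch.nn.functional as F

def retrieval_contrastive_loss(q_emb, p_emb, temperature=0.05):
    """In-batch contrastive loss for dense retrieval.

    q_emb: (N, D) query embeddings; p_emb: (N, D) passage embeddings,
    where passage i is the positive for query i and the remaining
    passages in the batch serve as negatives.
    """
    q_emb = F.normalize(q_emb, dim=1)
    p_emb = F.normalize(p_emb, dim=1)
    scores = q_emb @ p_emb.t() / temperature        # (N, N) query-passage scores
    targets = torch.arange(q_emb.size(0))
    return F.cross_entropy(scores, targets)
```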

On Contrastive Learning for Likelihood-free Inference

conormdurkan/lfi ICML 2020

Likelihood-free methods perform parameter inference in stochastic simulator models where evaluating the likelihood is intractable but sampling synthetic data is possible.
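
A hedged sketch of the contrastive idea in this setting (ratio estimation): a classifier is trained to distinguish dependent (parameter, simulation) pairs from shuffled ones, and its logit then approximates the log likelihood-to-marginal ratio used for inference. The classifier architecture and names below are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def ratio_estimation_loss(classifier, theta, x):
    """Contrastive ratio estimation for likelihood-free inference.

    theta: (N, Dt) parameters sampled from the prior.
    x:     (N, Dx) simulator outputs, where x[i] was generated from theta[i].
    Dependent pairs (theta[i], x[i]) are labeled 1; pairing each x with a
    randomly permuted theta gives approximate samples from the product of
    marginals, labeled 0. The trained classifier's logit approximates
    log p(x | theta) - log p(x), which can be used for posterior inference.
    """
    perm = torch.randperm(theta.size(0))
    joint = torch.cat([theta, x], dim=1)            # dependent pairs
    marginal = torch.cat([theta[perm], x], dim=1)   # shuffled pairs
    logits = classifier(torch.cat([joint, marginal], dim=0)).squeeze(-1)
    labels = torch.cat([torch.ones(theta.size(0)), torch.zeros(theta.size(0))])
    return F.binary_cross_entropy_with_logits(logits, labels)

# Illustrative classifier over concatenated (theta, x) inputs.
classifier = nn.Sequential(nn.Linear(2 + 3, 64), nn.ReLU(), nn.Linear(64, 1))
loss = ratio_estimation_loss(classifier, torch.randn(32, 2), torch.randn(32, 3))
```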