Contrastive Learning

2162 papers with code • 1 benchmark • 11 datasets

Contrastive Learning is a deep learning technique for unsupervised representation learning. The goal is to learn a representation of data such that similar instances are close together in the representation space, while dissimilar instances are far apart.

It has been shown to be effective in various computer vision and natural language processing tasks, including image retrieval, zero-shot learning, and cross-modal retrieval. In these tasks, the learned representations can be used as features for downstream tasks such as classification and clustering.
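The pull-together/push-apart objective described above is most commonly implemented as the InfoNCE loss: each anchor is scored against its positive and against the other samples in the batch, which serve as negatives. A minimal NumPy sketch of this idea (toy data; `info_nce_loss` is an illustrative name, not a specific library's API):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE: row i of `positives` is the positive for row i of
    `anchors`; every other row in the batch acts as a negative."""
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature  # (N, N) similarity matrix
    # Row i's correct "class" is column i (its positive pair):
    # cross-entropy of the softmax over each row against the diagonal
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
aligned = info_nce_loss(x, x + 0.01 * rng.normal(size=(8, 16)))
random_ = info_nce_loss(x, rng.normal(size=(8, 16)))
print(aligned < random_)  # near-duplicate positives give a much lower loss
```

When the positives are near-duplicates of their anchors the loss is close to zero; with unrelated positives it sits near log N, which is the intuition behind using large batches as implicit negative pools.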

(Image credit: Schroff et al. 2015)


Latest papers with no code

Metric Learning for 3D Point Clouds Using Optimal Transport

no code yet • Winter Conference on Applications of Computer Vision (WACV) 2024

Learning embeddings of any data largely depends on the ability of the target space to capture semantic relations.

Improving Composed Image Retrieval via Contrastive Learning with Scaling Positives and Negatives

no code yet • 17 Apr 2024

The Composed Image Retrieval (CIR) task aims to retrieve target images using a composed query consisting of a reference image and a modification text.

DACAD: Domain Adaptation Contrastive Learning for Anomaly Detection in Multivariate Time Series

no code yet • 17 Apr 2024

In this paper, we propose a novel Domain Adaptation Contrastive learning for Anomaly Detection in multivariate time series (DACAD) model to address this issue by combining UDA and contrastive representation learning.

Single-temporal Supervised Remote Change Detection for Domain Generalization

no code yet • 17 Apr 2024

In this paper, we propose a multimodal contrastive learning (ChangeCLIP) based on visual-language pre-training for change detection domain generalization.

Supervised Contrastive Vision Transformer for Breast Histopathological Image Classification

no code yet • 17 Apr 2024

We present a novel approach, Supervised Contrastive Vision Transformer (SupCon-ViT), for improving the accuracy and generalization of invasive ductal carcinoma classification by leveraging the strengths of both transfer learning, i.e., a pre-trained vision transformer, and supervised contrastive learning.
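The supervised contrastive objective this entry builds on (Khosla et al., 2020) extends the unsupervised setting by treating every same-label sample in the batch as a positive for an anchor. A minimal NumPy sketch (toy data; `supcon_loss` is an illustrative name, not the paper's code):

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss: for each anchor, all other
    same-label samples in the batch are positives; samples with a
    different label are negatives."""
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # (N, N) scaled cosine similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)  # exclude each anchor from its own row
    log_den = np.log(np.where(not_self, np.exp(sim), 0.0).sum(axis=1))
    loss = 0.0
    for i in range(n):
        pos = (labels == labels[i]) & not_self[i]
        if pos.any():
            # average log-softmax over all positives of anchor i
            loss += -np.mean(sim[i, pos] - log_den[i])
    return loss / n

rng = np.random.default_rng(0)
centers = np.array([[5.0] * 4, [-5.0] * 4])
labels = np.repeat([0, 1], 4)  # two well-separated classes
feats = centers[labels] + 0.1 * rng.normal(size=(8, 4))
tight = supcon_loss(feats, labels)
mixed = supcon_loss(feats, np.array([0, 1, 0, 1, 0, 1, 0, 1]))
print(tight < mixed)  # class-consistent labels give the lower loss
```

Because the label assignment defines the positive sets, class-consistent labels on well-clustered features yield a much lower loss than labels that cut across the clusters.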

Reuse out-of-year data to enhance land cover mapping via feature disentanglement and contrastive learning

no code yet • 17 Apr 2024

Typically, when creating a land cover (LC) map, precise ground truth data is collected through time-consuming and expensive field campaigns.

EMC$^2$: Efficient MCMC Negative Sampling for Contrastive Learning with Global Convergence

no code yet • 16 Apr 2024

We follow the global contrastive learning loss as introduced in SogCLR, and propose EMC$^2$ which utilizes an adaptive Metropolis-Hastings subroutine to generate hardness-aware negative samples in an online fashion during the optimization.
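The abstract names an adaptive Metropolis-Hastings subroutine but gives no details; purely to illustrate the general idea of MCMC-based hard-negative sampling (this is not EMC$^2$'s actual algorithm), a plain Metropolis-Hastings chain whose stationary distribution upweights negatives similar to the anchor could look like:

```python
import numpy as np

def mh_hard_negative(anchor, pool, state, temperature=0.5, rng=None):
    """One Metropolis-Hastings step over a candidate pool. The target
    distribution is proportional to exp(cos_sim(anchor, x) / T), so
    harder (more anchor-similar) negatives are visited more often.
    `state` is the index of the current negative; returns the new index."""
    if rng is None:
        rng = np.random.default_rng()
    proposal = rng.integers(len(pool))  # symmetric uniform proposal

    def log_weight(i):
        v = pool[i] / np.linalg.norm(pool[i])
        a = anchor / np.linalg.norm(anchor)
        return float(a @ v) / temperature

    # Accept with prob min(1, pi(proposal)/pi(state));
    # the uniform proposal density cancels in the ratio
    if np.log(rng.random()) < log_weight(proposal) - log_weight(state):
        return proposal
    return state

rng = np.random.default_rng(1)
anchor = rng.normal(size=8)
pool = rng.normal(size=(32, 8))
idx, counts = 0, np.zeros(32)
for _ in range(2000):
    idx = mh_hard_negative(anchor, pool, idx, rng=rng)
    counts[idx] += 1
hardest = np.argmax([anchor @ (p / np.linalg.norm(p)) for p in pool])
print(counts[hardest] > counts.mean())  # the chain concentrates on hard negatives
```

Each step costs a single similarity comparison rather than a softmax over the whole pool, which is the appeal of MCMC-style negative samplers in an online training loop.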

Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation

no code yet • 16 Apr 2024

We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes.

Contextrast: Contextual Contrastive Learning for Semantic Segmentation

no code yet • 16 Apr 2024

Despite great improvements in semantic segmentation, challenges persist because of the lack of local/global contexts and the relationship between them.

Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition

no code yet • 15 Apr 2024

In this study, we propose a novel Joint Contrastive learning framework with Feature Alignment (JCFA) to address cross-corpus EEG-based emotion recognition.