Multi-Source Domain Adaptation via Supervised Contrastive Learning and Confident Consistency Regularization

30 Jun 2021 · Marin Scalbert, Maria Vakalopoulou, Florent Couzinié-Devy

Multi-Source Unsupervised Domain Adaptation (multi-source UDA) aims to learn a model from several labeled source domains that performs well on a different target domain for which only unlabeled data are available at training time. To align source and target feature distributions, several recent works explicitly match source and target statistics, such as feature moments or class centroids. However, these approaches do not guarantee that class-conditional distributions are aligned across domains. In this work, we propose a new framework called Contrastive Multi-Source Domain Adaptation (CMSDA) for multi-source UDA that addresses this limitation. Discriminative features are learned from interpolated source examples via cross-entropy minimization, and from target examples via consistency regularization and hard pseudo-labeling. Simultaneously, the interpolated source examples are leveraged to align source class-conditional distributions through an interpolated version of the supervised contrastive loss. This alignment yields more general and transferable features, which further improves generalization to the target domain. Extensive experiments have been carried out on three standard multi-source UDA datasets, on which our method reports state-of-the-art results.

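To make the combined objective concrete, here is a minimal PyTorch sketch of one training step as the abstract describes it. The names (encoder, classifier, proj_head), the hyperparameters (alpha, tau, threshold), the FixMatch-style weak/strong augmentation scheme for the target consistency term, and the equal weighting of the three losses are illustrative assumptions, not the authors' implementation; consult the paper for the exact formulation.

```python
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss (Khosla et al., 2020) on a batch of embeddings."""
    z = F.normalize(z, dim=1)                      # compare on the unit hypersphere
    sim = z @ z.t() / tau                          # pairwise similarities / temperature
    n = z.size(0)
    not_self = ~torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & not_self
    sim = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    exp_sim = torch.exp(sim) * not_self            # denominator excludes the anchor itself
    log_prob = sim - torch.log(exp_sim.sum(dim=1, keepdim=True))
    n_pos = pos_mask.sum(dim=1).clamp(min=1)
    return -(pos_mask.float() * log_prob).sum(dim=1).div(n_pos).mean()

def cmsda_step(encoder, classifier, proj_head, xs, ys, xt_weak, xt_strong,
               alpha=0.2, tau=0.1, threshold=0.95):
    """One training step: interpolated source supervision + contrastive alignment
    + confidence-thresholded consistency regularization on unlabeled target data."""
    # Mixup-interpolate source examples and keep both sets of labels.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(xs.size(0), device=xs.device)
    x_mix = lam * xs + (1.0 - lam) * xs[perm]
    ys_a, ys_b = ys, ys[perm]

    feats = encoder(x_mix)
    logits = classifier(feats)

    # Cross-entropy on interpolated source examples.
    ce = lam * F.cross_entropy(logits, ys_a) + (1.0 - lam) * F.cross_entropy(logits, ys_b)

    # Interpolated supervised contrastive loss: aligns source class-conditional
    # distributions by pulling same-class embeddings together across domains.
    z = proj_head(feats)
    supcon = lam * supcon_loss(z, ys_a, tau) + (1.0 - lam) * supcon_loss(z, ys_b, tau)

    # Consistency regularization with hard pseudo-labels on the target:
    # confident predictions on a weak view supervise the strongly augmented view.
    with torch.no_grad():
        probs = F.softmax(classifier(encoder(xt_weak)), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf.ge(threshold).float()          # discard low-confidence pseudo-labels
    per_example = F.cross_entropy(classifier(encoder(xt_strong)), pseudo, reduction='none')
    cons = (per_example * keep).mean()

    return ce + supcon + cons                      # loss weights assumed equal here
```

Applying the same mixup coefficient to both the cross-entropy and the supervised contrastive terms keeps the two objectives consistent on each interpolated example; the confidence threshold controls how aggressively target pseudo-labels enter training.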
Task                                         Dataset        Model  Metric    Value  Global Rank
Multi-Source Unsupervised Domain Adaptation  DomainNet      CMSDA  Accuracy  50.42  #3
Multi-Source Unsupervised Domain Adaptation  MiniDomainNet  CMSDA  Accuracy  61.9   #1
Multi-Source Unsupervised Domain Adaptation  Office-Home    CMSDA  Accuracy  76.6   #1
