Multi-Source Unsupervised Domain Adaptation

21 papers with code • 9 benchmarks • 5 datasets

Multi-source unsupervised domain adaptation (MSDA/MUDA) transfers knowledge from several labeled source domains to a single unlabeled target domain, relaxing the single-source assumption of standard unsupervised domain adaptation.

Most implemented papers

STEM: An Approach to Multi-Source Domain Adaptation With Guarantees

anh-ntv/STEM_iccv21 ICCV 2021

To address the second challenge, we propose to bridge the gap between the target domain and the mixture of source domains in the latent space via a generator or feature extractor.

MOST: Multi-Source Domain Adaptation via Optimal Transport for Student-Teacher Learning

tuanrpt/MOST UAI 2021

To this end, we propose in this paper a novel model for multi-source DA using the theory of optimal transport and imitation learning.
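At the core of optimal-transport-based adaptation is a coupling between the source and target feature distributions. A minimal sketch of the entropic-regularized (Sinkhorn) solver is shown below; the feature matrices, regularization value, and iteration count are illustrative, not the authors' implementation:

```python
import numpy as np

def sinkhorn_plan(Xs, Xt, reg=0.1, n_iter=200):
    """Entropic-regularized OT plan between two empirical distributions.

    Xs: (n, d) source features, Xt: (m, d) target features.
    Returns an (n, m) coupling whose marginals are uniform.
    """
    n, m = len(Xs), len(Xt)
    # Squared Euclidean cost between every source/target pair,
    # normalised for numerical stability.
    C = ((Xs[:, None, :] - Xt[None, :, :]) ** 2).sum(-1)
    C = C / C.max()
    K = np.exp(-C / reg)                  # Gibbs kernel
    a, b = np.full(n, 1 / n), np.full(m, 1 / m)
    u = np.ones(n)
    for _ in range(n_iter):               # alternate marginal scalings
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]    # diag(u) @ K @ diag(v)

rng = np.random.default_rng(0)
P = sinkhorn_plan(rng.normal(0, 1, (5, 3)), rng.normal(1, 1, (7, 3)))
```

The resulting plan `P` tells each target sample how much mass it receives from each source sample, which is what a transport-based objective penalizes or uses for label propagation.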

Wasserstein Barycenter for Multi-Source Domain Adaptation

eddardd/WBTransport CVPR 2021

To overcome the challenges posed by this learning scenario, we propose a method for constructing an intermediate domain between sources and target domain, the Wasserstein Barycenter Transport (WBT).
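The intermediate domain in WBT is a Wasserstein barycenter of the source distributions. In the one-dimensional case the barycenter has a closed form (its quantile function is the weighted average of the inputs' quantile functions), which makes for a compact illustration; this simplified 1-D version is for intuition only, not the paper's method:

```python
import numpy as np

def barycenter_1d(samples_per_domain, weights=None, n_q=100):
    """W2 barycenter of 1-D empirical distributions via quantile averaging.

    In 1-D the barycenter's quantile function is the weighted mean of the
    input quantile functions, so no iterative solver is needed.
    """
    k = len(samples_per_domain)
    if weights is None:
        weights = np.full(k, 1 / k)
    qs = np.linspace(0, 1, n_q)
    quantiles = np.stack([np.quantile(s, qs) for s in samples_per_domain])
    return weights @ quantiles  # barycenter samples at quantile levels qs

# Two degenerate source domains at 0 and 4: the barycenter sits at 2,
# "between" the sources in Wasserstein geometry.
bary = barycenter_1d([np.zeros(50), np.full(50, 4.0)])
```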

Secure Domain Adaptation with Multiple Sources

serbanstan/secure-muda 23 Jun 2021

Multi-source unsupervised domain adaptation (MUDA) is a framework to address the challenge of annotated data scarcity in a target domain via transferring knowledge from multiple annotated source domains.

Improving Transferability of Domain Adaptation Networks Through Domain Alignment Layers

lucasfernando-aes/ms-dial 6 Sep 2021

Deep learning (DL) has been the primary approach in computer vision due to the strong results it has achieved across many tasks.

Seeking Similarities over Differences: Similarity-based Domain Alignment for Adaptive Object Detection

frezaeix/VISGA_Public ICCV 2021

In order to robustly deploy object detectors across a wide range of scenarios, they should be adaptable to shifts in the input distribution without the need to constantly annotate new data.

Aligning Domain-specific Distribution and Classifier for Cross-domain Classification from Multiple Sources

easezyc/deep-transfer-learning 4 Jan 2022

However, in the practical scenario, labeled data can be typically collected from multiple diverse sources, and they might be different not only from the target domain but also from each other.
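Methods in this family typically align each source with the target separately, for instance by minimizing a per-source distribution discrepancy such as the maximum mean discrepancy (MMD). A minimal RBF-kernel MMD estimator is sketched below (a generic illustration with an assumed `gamma`, not this repository's code):

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=1.0):
    """Biased squared MMD between samples X and Y under an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(1)
# Same distribution -> small discrepancy; shifted distribution -> large.
same = rbf_mmd2(rng.normal(0, 1, (100, 2)), rng.normal(0, 1, (100, 2)))
shifted = rbf_mmd2(rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2)))
```

Summing one such term per source domain gives an alignment loss that respects the differences *between* sources instead of pooling them into a single domain.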

FACT: Federated Adversarial Cross Training

jonas-lippl/fact 1 Jun 2023

We propose Federated Adversarial Cross Training (FACT), which uses the implicit domain differences between source clients to identify domain shifts in the target domain.

Multi-Source Domain Adaptation through Dataset Dictionary Learning in Wasserstein Space

eddardd/demo-dadil 27 Jul 2023

Based on our dictionary, we propose two novel methods for MSDA: DaDiL-R, based on the reconstruction of labeled samples in the target domain, and DaDiL-E, based on the ensembling of classifiers learned on atom distributions.

MS3D++: Ensemble of Experts for Multi-Source Unsupervised Domain Adaptation in 3D Object Detection

darrenjkt/ms3d 11 Aug 2023

MS3D++ provides a straightforward approach to domain adaptation by generating high-quality pseudo-labels, enabling the adaptation of 3D detectors to a diverse range of lidar types, regardless of their density.
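MS3D++ fuses the outputs of several pre-trained 3D detectors into pseudo-labels. A simplified classification analogue of that ensemble step, averaging expert confidences and keeping only high-confidence labels, is sketched below (the probability arrays and the 0.9 threshold are illustrative assumptions, not values from the paper):

```python
import numpy as np

def ensemble_pseudo_labels(prob_list, threshold=0.9):
    """Average the experts' class probabilities; keep only confident labels.

    prob_list: list of (n_samples, n_classes) probability arrays, one per expert.
    Returns (indices of retained samples, their pseudo-labels).
    """
    mean_p = np.mean(prob_list, axis=0)   # ensemble confidence per class
    conf = mean_p.max(axis=1)
    keep = np.where(conf >= threshold)[0]  # discard uncertain samples
    return keep, mean_p[keep].argmax(axis=1)

# Two experts; sample 1 is ambiguous (experts disagree) and gets dropped.
p1 = np.array([[0.95, 0.05], [0.60, 0.40], [0.10, 0.90]])
p2 = np.array([[0.97, 0.03], [0.40, 0.60], [0.05, 0.95]])
idx, labels = ensemble_pseudo_labels([p1, p2])
```

Filtering by ensemble agreement is what keeps the pseudo-labels "high-quality": samples where the experts disagree are excluded from self-training rather than propagated as noisy supervision.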