Multi-Source Unsupervised Domain Adaptation
21 papers with code • 9 benchmarks • 5 datasets
Most implemented papers
STEM: An Approach to Multi-Source Domain Adaptation With Guarantees
To address the second challenge, we propose to bridge the gap between the target domain and the mixture of source domains in the latent space via a generator or feature extractor.
MOST: Multi-Source Domain Adaptation via Optimal Transport for Student-Teacher Learning
To this end, we propose in this paper a novel model for multi-source DA using the theory of optimal transport and imitation learning.
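As a rough illustration of the optimal-transport machinery this line of work builds on, the sketch below computes an entropic-regularized transport plan between toy "teacher" and "student" feature sets using Sinkhorn iterations. This is a generic OT sketch under assumed toy data, not the MOST model itself; all variable names are illustrative.

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, n_iters=300):
    """Entropic-regularized optimal transport (Sinkhorn iterations)
    between histograms a and b with cost matrix C. Returns the plan."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]

# Toy example: move mass between 4 "teacher" and 4 "student" features.
rng = np.random.default_rng(0)
teacher = rng.normal(0.0, 1.0, size=(4, 2))   # source-side features
student = rng.normal(0.5, 1.0, size=(4, 2))   # target-side features
C = ((teacher[:, None, :] - student[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost
a = np.full(4, 0.25)  # uniform mass on teacher samples
b = np.full(4, 0.25)  # uniform mass on student samples
P = sinkhorn(a, b, C)
# The marginals of P recover the input histograms a and b.
```

The plan `P` says how much mass each teacher feature sends to each student feature; in an OT-based adaptation method, such a coupling is what ties source-side supervision to unlabeled target samples.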
Wasserstein Barycenter for Multi-Source Domain Adaptation
To overcome the challenges posed by this learning scenario, we propose a method for constructing an intermediate domain between sources and target domain, the Wasserstein Barycenter Transport (WBT).
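In one dimension, the Wasserstein-2 barycenter reduces to averaging quantile functions, so with equal-size samples it is simply a weighted average of the sorted arrays. The sketch below uses that special case to build a toy "intermediate domain" between a source and a target; it illustrates the barycenter idea only and is not the WBT method itself.

```python
import numpy as np

def w2_barycenter_1d(samples_list, weights):
    """1-D Wasserstein-2 barycenter: weighted average of the sorted
    samples (i.e., of the empirical quantile functions)."""
    sorted_samples = [np.sort(s) for s in samples_list]
    return sum(w * s for w, s in zip(weights, sorted_samples))

rng = np.random.default_rng(0)
source = rng.normal(-2.0, 1.0, size=500)   # a source domain (1-D features)
target = rng.normal(+2.0, 1.0, size=500)   # the target domain
# Equal weights yield an "intermediate domain" halfway between the two.
mid = w2_barycenter_1d([source, target], [0.5, 0.5])
```

For these two unit-variance Gaussians the barycenter is again roughly unit-variance, centered between the source and target means, which is exactly the kind of intermediate distribution a transport-based method can adapt through.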
Secure Domain Adaptation with Multiple Sources
Multi-source unsupervised domain adaptation (MUDA) is a framework to address the challenge of annotated data scarcity in a target domain via transferring knowledge from multiple annotated source domains.
Improving Transferability of Domain Adaptation Networks Through Domain Alignment Layers
Deep learning (DL) has become the primary approach for a wide range of computer vision tasks, owing to the strong results it achieves across many of them.
Seeking Similarities over Differences: Similarity-based Domain Alignment for Adaptive Object Detection
In order to robustly deploy object detectors across a wide range of scenarios, they should be adaptable to shifts in the input distribution without the need to constantly annotate new data.
Aligning Domain-specific Distribution and Classifier for Cross-domain Classification from Multiple Sources
However, in the practical scenario, labeled data can be typically collected from multiple diverse sources, and they might be different not only from the target domain but also from each other.
FACT: Federated Adversarial Cross Training
We propose Federated Adversarial Cross Training (FACT), which uses the implicit domain differences between source clients to identify domain shifts in the target domain.
Multi-Source Domain Adaptation through Dataset Dictionary Learning in Wasserstein Space
Based on our dictionary, we propose two novel methods for MSDA: DaDiL-R, based on the reconstruction of labeled samples in the target domain, and DaDiL-E, based on the ensembling of classifiers learned on atom distributions.
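The ensembling step of DaDiL-E can be sketched generically: classifiers trained on each atom distribution produce class probabilities, which are fused with the target domain's barycentric coordinates as ensemble weights. The probabilities and weights below are hypothetical, and this shows only the weighted-fusion step, not the dictionary learning itself.

```python
import numpy as np

def ensemble_predict(atom_probs, barycentric_coords):
    """Weighted average of per-atom class probabilities, using the
    target domain's barycentric coordinates as ensemble weights."""
    atom_probs = np.asarray(atom_probs)        # (n_atoms, n_samples, n_classes)
    w = np.asarray(barycentric_coords)         # (n_atoms,), sums to 1
    return np.tensordot(w, atom_probs, axes=1) # (n_samples, n_classes)

# Hypothetical probabilities from 3 atom classifiers on 2 target samples.
probs = [
    [[0.7, 0.3], [0.4, 0.6]],
    [[0.6, 0.4], [0.5, 0.5]],
    [[0.9, 0.1], [0.2, 0.8]],
]
weights = [0.5, 0.3, 0.2]  # hypothetical barycentric coordinates
fused = ensemble_predict(probs, weights)
pred = fused.argmax(axis=1)  # class 0 for sample 1, class 1 for sample 2
```

Because the weights sum to one, the fused outputs remain valid probability distributions over classes.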
MS3D++: Ensemble of Experts for Multi-Source Unsupervised Domain Adaptation in 3D Object Detection
MS3D++ provides a straightforward approach to domain adaptation by generating high-quality pseudo-labels, enabling the adaptation of 3D detectors to a diverse range of lidar types, regardless of their density.
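One basic ingredient of ensemble-based pseudo-labeling is confidence filtering: detections are kept only when the expert ensemble agrees they are reliable. The sketch below is a minimal, hypothetical version of that filtering step with made-up confidences; the full MS3D++ pipeline involves considerably more (e.g., box fusion across detectors and frames).

```python
import numpy as np

def filter_pseudo_labels(expert_scores, threshold=0.7):
    """Keep detections whose mean confidence across the expert
    ensemble reaches the threshold; return their indices."""
    mean_conf = np.asarray(expert_scores).mean(axis=0)
    return np.flatnonzero(mean_conf >= threshold)

# Hypothetical confidences from 3 pre-trained detectors for 4 boxes.
scores = [
    [0.90, 0.40, 0.80, 0.20],
    [0.80, 0.50, 0.70, 0.30],
    [0.95, 0.30, 0.90, 0.10],
]
keep = filter_pseudo_labels(scores, threshold=0.7)  # boxes 0 and 2 survive
```

The surviving high-confidence boxes can then serve as pseudo-labels for fine-tuning a detector on the unlabeled target lidar data.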