Partial Domain Adaptation
18 papers with code • 5 benchmarks • 5 datasets
Partial Domain Adaptation (PDA) is a transfer learning paradigm that transfers relevant knowledge from a large-scale source domain to a small-scale target domain whose label space is a subset of the source label space.
Domain adaptation enables the learner to safely generalize into novel environments by mitigating domain shifts across distributions.
It can be characterized as (1) a non-adversarial DA method that does not explicitly deploy domain alignment, enjoying faster convergence; and (2) a versatile approach that handles four existing scenarios: Closed-Set, Partial-Set, Multi-Source, and Multi-Target DA, outperforming state-of-the-art methods in these scenarios, especially on one of the largest and hardest datasets to date (7.3% on DomainNet).
We present Partial Adversarial Domain Adaptation (PADA), which simultaneously alleviates negative transfer by down-weighting the data of outlier source classes when training both the source classifier and the domain adversary, and promotes positive transfer by matching the feature distributions in the shared label space.
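The down-weighting idea above can be sketched with a small, hypothetical helper: class weights are estimated by averaging the classifier's softmax predictions over target data, so source-only (outlier) classes, which target samples rarely activate, receive weights near zero. This is an illustrative sketch of the mechanism, not the authors' implementation; the function name and normalization choice are assumptions.

```python
import numpy as np

def pada_class_weights(target_probs):
    """Estimate per-source-class weights from target softmax predictions.

    target_probs: (n_target, n_source_classes) array of softmax outputs
    on unlabeled target samples. Classes absent from the target domain
    accumulate low average probability, so their weight shrinks toward
    zero, down-weighting outlier source classes in the training losses.
    """
    gamma = target_probs.mean(axis=0)   # average predicted probability per class
    return gamma / gamma.max()          # normalize so the largest weight is 1
```

In training, such weights would multiply the per-sample classification and adversarial losses according to each source sample's class label, suppressing outlier classes while leaving shared classes nearly unweighted.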
Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Mini-batch optimal transport (m-OT) has been widely used recently to deal with the memory issue of OT in large-scale applications.
This paper proposes an importance-weighted adversarial-nets-based method for unsupervised domain adaptation, specifically for partial domain adaptation, where the target domain has fewer classes than the source domain.
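One way to realize importance weighting in this adversarial setting is to score each source sample with a domain discriminator and down-weight samples that look source-only (likely belonging to outlier classes). The sketch below is a simplified illustration of that idea; the exact weight formula varies across papers, and the convention that D(x) near 1 means "source-only-looking" is an assumption here.

```python
import numpy as np

def importance_weights(disc_source_scores):
    """Per-sample importance weights for source data.

    disc_source_scores: discriminator outputs D(x) in (0, 1) on source
    samples, where D(x) -> 1 means the sample looks source-specific
    (probable outlier class) and D(x) -> 0 means it looks target-like
    (probable shared class).
    """
    w = 1.0 - disc_source_scores        # down-weight source-only-looking samples
    return w / w.mean()                 # rescale to mean 1 for a stable loss scale
```

These weights would then scale each source sample's contribution to the adversarial and classification losses, concentrating adaptation on the shared label space.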
Under the condition that target labels are unknown, the key challenge of PDA is how to transfer relevant examples in the shared classes to promote positive transfer, and ignore irrelevant ones in the specific classes to mitigate negative transfer.
While some methods address target settings with either partial or open-set categories, they assume that the particular setting is known a priori.
On one hand, negative transfer results in the misclassification of target samples into classes present only in the source domain.
Deep domain adaptation methods have achieved appealing performance by learning transferable representations from a well-labeled source domain to a different but related unlabeled target domain.