Unsupervised Domain Adaptation via Minimized Joint Error

1 Jan 2021  ·  Dexuan Zhang, Tatsuya Harada

Unsupervised domain adaptation transfers knowledge from a learned source domain to a different (but related) target distribution for which few or no labeled data are available. Upper bounds on the target error under such a transfer have been proposed; e.g., Ben-David et al. (2010) established a theory based on simultaneously minimizing the source error and the distance between the marginal distributions. However, most works ignore the joint error. In this paper, we argue that the joint error is essential for the domain adaptation problem, in particular when samples from different classes in the source and target domains become closely aligned while the marginal distributions are being matched. To tackle this problem, we propose a novel upper bound that includes the joint error, and we utilize a constrained hypothesis space to further tighten this bound. Furthermore, we propose a novel cross margin discrepancy to measure the dissimilarity between hypotheses, which alleviates instability during adversarial learning. Finally, we present extensive empirical evidence that our proposal outperforms related approaches in image classification error rates on standard domain adaptation benchmarks.
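For reference, the bound of Ben-David et al. (2010) mentioned above is commonly stated as follows; the final term $\lambda$, the joint error of the ideal joint hypothesis, is the quantity this paper argues should not be ignored (standard notation, which may differ from the paper's own):

```latex
% Target-error bound of Ben-David et al. (2010): for any h in H,
\epsilon_T(h) \;\le\; \epsilon_S(h)
  + \tfrac{1}{2}\, d_{\mathcal{H}\Delta\mathcal{H}}(\mathcal{D}_S, \mathcal{D}_T)
  + \lambda,
\qquad
\lambda = \min_{h' \in \mathcal{H}} \big[\, \epsilon_S(h') + \epsilon_T(h') \,\big]
```

The abstract also describes measuring the dissimilarity between hypotheses adversarially. As a rough illustration of that family of methods (not the paper's own algorithm), the sketch below follows the classifier-discrepancy scheme of Saito et al. (2018), with a plain L1 discrepancy standing in for the cross margin discrepancy, whose definition is not given in this abstract; all names and sizes are hypothetical:

```python
# Hypothetical sketch: adversarial training over two hypotheses (MCD-style).
# The paper's cross margin discrepancy would replace `discrepancy` below.
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_dim, num_classes = 256, 10          # hypothetical sizes
G = nn.Sequential(nn.Flatten(), nn.Linear(784, feature_dim), nn.ReLU())  # feature extractor
F1 = nn.Linear(feature_dim, num_classes)    # hypothesis h1
F2 = nn.Linear(feature_dim, num_classes)    # hypothesis h2
opt_g = torch.optim.SGD(G.parameters(), lr=1e-3)
opt_f = torch.optim.SGD(list(F1.parameters()) + list(F2.parameters()), lr=1e-3)

def discrepancy(logits1, logits2):
    # Placeholder L1 distance between class-probability outputs.
    return (logits1.softmax(dim=1) - logits2.softmax(dim=1)).abs().mean()

def train_step(xs, ys, xt):
    # (A) Minimize source error for both hypotheses.
    opt_g.zero_grad(); opt_f.zero_grad()
    fs = G(xs)
    (F.cross_entropy(F1(fs), ys) + F.cross_entropy(F2(fs), ys)).backward()
    opt_g.step(); opt_f.step()

    # (B) Maximize target discrepancy w.r.t. the hypotheses (features frozen),
    #     while keeping the source error small.
    opt_f.zero_grad()
    fs, ft = G(xs).detach(), G(xt).detach()
    (F.cross_entropy(F1(fs), ys) + F.cross_entropy(F2(fs), ys)
     - discrepancy(F1(ft), F2(ft))).backward()
    opt_f.step()

    # (C) Minimize target discrepancy w.r.t. the feature extractor.
    opt_g.zero_grad()
    ft = G(xt)
    discrepancy(F1(ft), F2(ft)).backward()
    opt_g.step()
```

The alternating maximization in (B) and minimization in (C) form the adversarial game whose instability the proposed cross margin discrepancy is designed to alleviate.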
