Deep multi-Wasserstein unsupervised domain adaptation

In unsupervised domain adaptation (DA), one aims to learn, from labeled source data and fully unlabeled target examples, a model with a low error on the target domain. In this setting, standard generalization bounds prompt us to minimize the sum of three terms: (a) the source true risk, (b) the divergence between the source and target domains, and (c) the combined error of the ideal joint hypothesis over the two domains. Many DA methods – especially those using deep neural networks – have focused on the first two terms, using different divergence measures to align the source and target distributions in a shared latent feature space, while ignoring the third term on the assumption that it is negligible for the adaptation. However, it has been shown that purely aligning the two distributions while minimizing the source error may lead to so-called negative transfer. In this paper, we address this issue with a new deep unsupervised DA method – called MCDA – that minimizes the first two terms while controlling the third. MCDA exploits highly confident target samples (selected via softmax predictions) to minimize class-wise Wasserstein distances and efficiently approximate the ideal joint hypothesis. Empirical results show that our approach outperforms state-of-the-art methods.
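The class-wise alignment idea described above can be sketched in a few lines: assign each target sample a pseudo-label from the classifier's softmax output, keep only samples whose confidence exceeds a threshold, and compare source and target features class by class. The sketch below is a minimal illustration, not the paper's implementation; the confidence threshold `tau`, the quantile-based 1-D Wasserstein approximation, and the per-dimension averaging are all simplifying assumptions.

```python
import numpy as np

def wasserstein_1d(a, b, n_quantiles=100):
    # Approximate the 1-D Wasserstein-1 distance between two empirical
    # samples by comparing their quantile functions on a common grid.
    qs = np.linspace(0.0, 1.0, n_quantiles)
    return np.mean(np.abs(np.quantile(a, qs) - np.quantile(b, qs)))

def classwise_wasserstein(src_feats, src_labels, tgt_feats, tgt_probs, tau=0.9):
    """Sum over classes of the Wasserstein distance between source features
    and confidently pseudo-labeled target features (averaged over feature
    dimensions). `tau` is an assumed confidence threshold, not a value
    from the paper."""
    pseudo = tgt_probs.argmax(axis=1)          # pseudo-label = argmax softmax
    confident = tgt_probs.max(axis=1) >= tau   # keep only confident targets
    total = 0.0
    for c in np.unique(src_labels):
        s = src_feats[src_labels == c]
        t = tgt_feats[confident & (pseudo == c)]
        if len(t) == 0:                        # no confident targets: skip class
            continue
        total += np.mean([wasserstein_1d(s[:, d], t[:, d])
                          for d in range(src_feats.shape[1])])
    return total
```

In a full training loop this quantity would be added to the source classification loss and a global domain-divergence term, so that gradients pull same-class source and target features together in the shared latent space.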





