ATPL: Mutually enhanced adversarial training and pseudo labeling for unsupervised domain adaptation

Unsupervised domain adaptation aims to transfer knowledge from a labeled source domain to a related but unlabeled target domain. Most existing approaches either adversarially reduce the domain shift or use pseudo-labels to supply category information during adaptation. However, adversarial training alone may sacrifice the discriminability of the target features, since no category information is available. Moreover, pseudo-labeling methods struggle to produce high-confidence pseudo-labels, since the classifier is trained only on the source domain and a domain discrepancy remains; noisy pseudo-labels can therefore harm the learning of target representations. A potential solution is to make the two strategies compensate for each other, simultaneously guaranteeing feature transferability and discriminability, the two key criteria for feature representations in domain adaptation. In this paper, we propose a novel method named ATPL, which mutually promotes Adversarial Training and Pseudo Labeling for unsupervised domain adaptation. ATPL produces high-confidence pseudo-labels through adversarial training, and in turn uses the pseudo-labeled information to improve the adversarial training process, guaranteeing feature transferability by generating adversarial data that fill in the domain gap. The pseudo-labels also boost feature discriminability. Extensive experiments on real datasets demonstrate that the proposed ATPL method outperforms state-of-the-art unsupervised domain adaptation methods.
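One ingredient the abstract describes is selecting only high-confidence pseudo-labels for target samples before they are fed back into training. The following is a minimal sketch of such a confidence-thresholded selection step, not the paper's actual algorithm; the function name, the threshold value, and the use of softmax outputs are assumptions for illustration.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.9):
    """Keep only target samples whose top predicted class probability
    exceeds the confidence threshold (hypothetical selection rule,
    not the ATPL procedure itself)."""
    confidence = probs.max(axis=1)          # top class probability per sample
    mask = confidence >= threshold          # which samples are kept
    labels = probs.argmax(axis=1)           # predicted class per sample
    return mask, labels

# Toy target-domain softmax outputs: 3 samples, 2 classes.
probs = np.array([[0.95, 0.05],
                  [0.60, 0.40],
                  [0.10, 0.90]])
mask, labels = select_pseudo_labels(probs)
# Samples 0 and 2 pass the 0.9 threshold; sample 1 is discarded.
```

In a full pipeline, the retained `(sample, label)` pairs would supply the category information that plain adversarial alignment lacks, while the adversarial component is expected to raise the fraction of samples that clear the threshold.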
