Reiterative Domain Aware Multi-Target Adaptation

26 Aug 2021  ·  Sudipan Saha, Shan Zhao, Nasrullah Sheikh, Xiao Xiang Zhu ·

Most domain adaptation methods focus on single-source-single-target adaptation settings. Multi-target domain adaptation is a powerful extension in which a single classifier is learned for multiple unlabeled target domains. Building a multi-target classifier requires a feature extractor that generalizes well across domains and an effective aggregation of features from the labeled source and the different unlabeled target domains. Towards the first, we use the recently popular Transformer as the feature extraction backbone. Towards the second, we use a co-teaching-based approach with a dual-classifier head, one of which is a graph neural network. The proposed approach uses a sequential adaptation strategy that adapts to one domain at a time, starting from the target domains most similar to the source, under the assumption that the network finds such targets easier to adapt to. After adapting to each target, samples whose softmax-based confidence score exceeds a threshold are added to the pseudo-source, thus aggregating knowledge from different domains. However, softmax is not entirely trustworthy as a confidence score and may assign high scores to unreliable samples when trained for many iterations. To mitigate this effect, we adopt a reiterative approach: we reduce the number of adaptation iterations per target but reiterate multiple times over the target domains. Experimental evaluation on the Office-Home, Office-31, and DomainNet datasets shows significant improvement over existing methods. We achieve a 10.7% average improvement on the Office-Home dataset over state-of-the-art methods.
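The core loop described above — short adaptation passes over similarity-ordered targets, with confidently pseudo-labelled samples folded into a growing pseudo-source — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names (`select_confident`, `reiterative_adaptation`), the `adapt_fn` callback, and the 0.9 threshold are all assumptions for exposition.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def select_confident(logits, threshold=0.9):
    """Return (indices, pseudo-labels) of samples whose maximum softmax
    probability exceeds the threshold, as in the paper's pseudo-source step."""
    probs = softmax(logits)
    confidence = probs.max(axis=1)
    pseudo_labels = probs.argmax(axis=1)
    keep = confidence > threshold
    return np.where(keep)[0], pseudo_labels[keep]

def reiterative_adaptation(source, targets, adapt_fn,
                           num_reiterations=3, threshold=0.9):
    """Sketch of the reiterative strategy: a few short adaptation passes over
    the targets (assumed pre-sorted from most to least similar to the source),
    each pass growing the pseudo-source with confident target samples.

    `source` is a list of (sample, label) pairs; each target is an array of
    unlabeled samples; `adapt_fn(pseudo_source, target)` is a hypothetical
    stand-in for one short adaptation run that returns target logits.
    """
    pseudo_source = list(source)
    for _ in range(num_reiterations):
        for target in targets:
            logits = adapt_fn(pseudo_source, target)
            idx, labels = select_confident(logits, threshold)
            # Fold confident samples into the pseudo-source (this toy sketch
            # does not deduplicate samples selected in earlier reiterations).
            pseudo_source.extend(
                (target[i], int(lab)) for i, lab in zip(idx, labels))
    return pseudo_source
```

Reducing the per-target iteration count inside `adapt_fn` while looping `num_reiterations` times is what limits the over-confident softmax scores the abstract warns about.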

