Curriculum Graph Co-Teaching for Multi-Target Domain Adaptation

In this paper, we address multi-target domain adaptation (MTDA): given one labeled source dataset and multiple unlabeled target datasets that differ in data distribution, the task is to learn a robust predictor for all the target domains. We identify two key aspects that help alleviate multiple domain shifts in MTDA: feature aggregation and curriculum learning. To this end, we propose Curriculum Graph Co-Teaching (CGCT), which uses a dual classifier head, one of them being a graph convolutional network (GCN) that aggregates features from similar samples across the domains. To prevent the classifiers from over-fitting on their own noisy pseudo-labels, we develop a co-teaching strategy with the dual classifier head, assisted by curriculum learning to obtain more reliable pseudo-labels. Furthermore, when domain labels are available, we propose Domain-aware Curriculum Learning (DCL), a sequential adaptation strategy that first adapts on the easier target domains, followed by the harder ones. We experimentally demonstrate the effectiveness of our proposed frameworks on several benchmarks and advance the state-of-the-art in MTDA by large margins (e.g. +5.6% on DomainNet).
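To illustrate the feature-aggregation idea behind the GCN classifier head, here is a minimal NumPy sketch: an affinity graph is built from pairwise feature similarity (so that similar samples, possibly from different domains, are connected), the adjacency is symmetrically normalized, and neighbor features are aggregated before a linear classification projection. The function name, the similarity threshold, and the shapes are illustrative assumptions, not the paper's actual CGCT implementation.

```python
import numpy as np

def gcn_aggregate(feats, weight, threshold=0.5):
    """One GCN-style propagation step (hypothetical sketch): similar samples
    exchange features through an affinity graph before classification."""
    # Cosine similarity between all pairs of sample features.
    norm = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = norm @ norm.T
    # Binary affinity graph: connect samples whose similarity clears the threshold.
    adj = (sim > threshold).astype(float)
    np.fill_diagonal(adj, 1.0)  # add self-loops
    # Symmetric degree normalization: D^{-1/2} A D^{-1/2}.
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    adj_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Aggregate neighbor features, then apply the head's linear projection.
    return adj_norm @ feats @ weight

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))   # features from source + target samples
weight = rng.normal(size=(16, 5))  # projection to 5 hypothetical classes
logits = gcn_aggregate(feats, weight)
print(logits.shape)  # (8, 5): one class-score row per sample
```

In the full method, the logits from this graph head and from a standard MLP head would supervise each other via pseudo-labels (the co-teaching step), with the curriculum controlling which target samples enter the graph.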

CVPR 2021
| Task                              | Dataset     | Model | Metric Name      | Metric Value | Global Rank |
|-----------------------------------|-------------|-------|------------------|--------------|-------------|
| Blended-target Domain Adaptation  | DomainNet   | CGCT  | Average Accuracy | 32.3         | #2          |
| Multi-target Domain Adaptation    | DomainNet   | DCGCT | Accuracy         | 34.4         | #2          |
| Blended-target Domain Adaptation  | Office-31   | DCGCT | Average Accuracy | 88.2         | #2          |
| Multi-target Domain Adaptation    | Office-31   | DCGCT | Accuracy         | 88.8         | #2          |
| Multi-target Domain Adaptation    | Office-Home | DCGCT | Accuracy         | 69.8         | #2          |
| Blended-target Domain Adaptation  | Office-Home | CGCT  | Average Accuracy | 66.5         | #2          |

