Larger Norm More Transferable: An Adaptive Feature Norm Approach for Unsupervised Domain Adaptation

ICCV 2019  ·  Ruijia Xu, Guanbin Li, Jihan Yang, Liang Lin

Domain adaptation enables a learner to safely generalize to novel environments by mitigating domain shift across distributions. Previous works may not effectively uncover the underlying reasons for the drastic model degradation on the target task. In this paper, we empirically reveal that the erratic discrimination on the target domain mainly stems from its much smaller feature norms with respect to those of the source domain. To this end, we propose a novel parameter-free Adaptive Feature Norm approach. We demonstrate that progressively adapting the feature norms of the two domains to a large range of values yields significant transfer gains, implying that task-specific features with larger norms are more transferable. Our method unifies the computation of both standard and partial domain adaptation and is more robust against the negative transfer issue. Without bells and whistles, and with only a few lines of code, our method substantially lifts performance on the target task and exceeds the state of the art by a large margin (11.5% on Office-Home and 17.1% on VisDA2017). We hope our simple yet effective approach will shed some light on future research in transfer learning. Code is available at
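The core idea described in the abstract, progressively adapting each sample's feature norm toward a larger value, can be sketched in a few lines of PyTorch. The snippet below is a minimal, hedged illustration of a stepwise adaptive feature norm loss as the abstract describes it (the function name `stepwise_afn_loss` and the step size `delta_r` are illustrative, not taken from the authors' released code): each feature's norm is pushed toward its own detached norm plus a small increment, so norms grow progressively during training.

```python
import torch

def stepwise_afn_loss(features: torch.Tensor, delta_r: float = 1.0) -> torch.Tensor:
    """Sketch of a stepwise adaptive feature norm loss.

    Encourages each sample's L2 feature norm to grow by a small step
    `delta_r` relative to its current value. The target norm is detached,
    so gradients only flow through the current norm.
    """
    norms = features.norm(p=2, dim=1)          # per-sample L2 norms
    target = norms.detach() + delta_r          # moving target: current norm + step
    return ((target - norms) ** 2).mean()      # L2 penalty toward the larger target

# Toy usage on a random batch of 256-dim features
f = torch.randn(8, 256, requires_grad=True)
loss = stepwise_afn_loss(f, delta_r=1.0)
loss.backward()  # gradients push the feature norms to grow
```

Because the target is the detached current norm plus `delta_r`, the loss at each step evaluates to roughly `delta_r**2` while its gradient steadily enlarges the norms, which matches the "progressively adapting the feature norms to a large range of values" behavior the abstract claims.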


Results from the Paper

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Domain Adaptation | ImageCLEF-DA | IAFN+ENT | Accuracy | 88.9 | #6 |
| Domain Adaptation | Office-31 | IAFN+ENT | Average Accuracy | 87.1 | #20 |
| Partial Domain Adaptation | Office-Home | SAFN | Accuracy (%) | 71.8 | #10 |
| Domain Adaptation | VisDA2017 | IAFN | Accuracy | 76.1 | #20 |