Towards Unified and Effective Domain Generalization

16 Oct 2023 · Yiyuan Zhang, Kaixiong Gong, Xiaohan Ding, Kaipeng Zhang, Fangrui Lv, Kurt Keutzer, Xiangyu Yue

We propose $\textbf{UniDG}$, a novel and $\textbf{Uni}$fied framework for $\textbf{D}$omain $\textbf{G}$eneralization that significantly enhances the out-of-distribution generalization performance of foundation models regardless of their architectures. The core idea of UniDG is to finetune models during the inference stage, which saves the cost of iterative training. Specifically, we encourage models to learn the distribution of the test data in an unsupervised manner and impose a penalty on the update step of the model parameters. The penalty term effectively mitigates catastrophic forgetting, as we aim to maximally preserve the valuable knowledge in the original model. Empirically, across 12 visual backbones, including CNN-, MLP-, and Transformer-based models ranging from 1.89M to 303M parameters, UniDG shows an average accuracy improvement of +5.4% on DomainBed. These results demonstrate the superiority and versatility of UniDG. The code is publicly available at https://github.com/invictus717/UniDG
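Below is a minimal sketch of the kind of test-time adaptation the abstract describes: an unsupervised loss (here, prediction entropy) on each test batch, plus an L2 penalty that keeps the adapted parameters close to the frozen source weights to limit catastrophic forgetting. The function name `adapt_batch`, the choice of entropy as the unsupervised objective, and all hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn.functional as F


def adapt_batch(model, x, optimizer, source_params, penalty_weight=1.0):
    """One test-time adaptation step (illustrative, not the paper's exact code):
    minimize prediction entropy on the current test batch while penalizing
    drift of the parameters away from the original (source) model."""
    logits = model(x)
    probs = F.softmax(logits, dim=1)
    # Unsupervised objective: mean entropy of the batch predictions.
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()

    # Penalty on the update step: squared distance to the frozen source weights.
    drift = sum(((p - p0) ** 2).sum()
                for p, p0 in zip(model.parameters(), source_params))

    loss = entropy + penalty_weight * drift
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return logits.detach()


# Usage sketch: adapt any pretrained backbone batch-by-batch at inference time.
# model = ...  # pretrained CNN / MLP / Transformer classifier
# source_params = [p.detach().clone() for p in model.parameters()]
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
# for x, _ in test_loader:
#     preds = adapt_batch(model, x, optimizer, source_params)
```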

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Domain Generalization | DomainNet | UniDG + CORAL + ConvNeXt-B | Average Accuracy | 59.5 | #7 |
| Domain Generalization | Office-Home | UniDG + CORAL + ConvNeXt-B | Average Accuracy | 88.9 | #2 |
| Domain Generalization | PACS | UniDG + CORAL + ConvNeXt-B | Average Accuracy | 95.6 | #10 |
| Domain Generalization | TerraIncognita | UniDG + CORAL + ConvNeXt-B | Average Accuracy | 69.6 | #1 |
| Domain Generalization | VLCS | UniDG + CORAL + ConvNeXt-B | Average Accuracy | 84.5 | #2 |
