Cross-Domain Ensemble Distillation for Domain Generalization

29 Sep 2021  ·  Kyungmoon Lee, Sungyeon Kim, Suha Kwak

Domain generalization is the task of learning a model that generalizes to unseen target domains by using multiple source domains. Many approaches explicitly align the distributions of the source domains; however, because the target domain is unavailable during training, optimizing for domain alignment risks overfitting to the sources. To address this issue, this paper proposes a domain generalization method based on self-distillation. The proposed method trains a model to be robust to domain shift by allowing meaningful erroneous predictions across multiple domains. Specifically, it matches each predictive distribution against the ensemble of predictive distributions of samples that share the same class label but come from different domains. We also propose a de-stylization method that standardizes the feature maps of images to encourage consistent predictions. Image classification experiments on two benchmarks demonstrate that the proposed method substantially improves performance in both single-source and multi-source settings, and further experiments show that it is also effective for person re-ID.
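As a rough illustration of the two ingredients described above, the sketch below implements (i) a cross-domain ensemble distillation loss that penalizes each sample's softened prediction for deviating from the ensemble of predictions sharing its class label, and (ii) a feature-map standardization in the spirit of the de-stylization step. The names (`xded_loss`, `destylize`), the temperature, and the use of per-channel instance statistics are illustrative assumptions, not the authors' released code.

```python
# Minimal PyTorch sketch of the two components, under assumed names and details.
import torch
import torch.nn.functional as F


def xded_loss(logits, labels, temperature=4.0):
    """Cross-domain ensemble distillation (sketch).

    For each class present in the batch, average the softened predictive
    distributions of all samples with that label (drawn from different
    source domains) and penalize each sample's prediction for deviating
    from this class-wise ensemble via KL divergence.
    """
    probs = F.softmax(logits / temperature, dim=1)       # soft predictions
    log_probs = F.log_softmax(logits / temperature, dim=1)

    loss = logits.new_zeros(())
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:
            continue  # a single sample cannot form a cross-domain ensemble
        ensemble = probs[idx].mean(dim=0).detach()       # ensemble target
        loss = loss + F.kl_div(
            log_probs[idx],
            ensemble.expand(idx.numel(), -1),
            reduction="batchmean",
        )
    return loss


def destylize(feat, eps=1e-5):
    """De-stylization (sketch): standardize each feature map with its own
    per-channel spatial mean and variance, removing instance-specific style
    statistics in the spirit of instance normalization."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    var = feat.var(dim=(2, 3), keepdim=True)
    return (feat - mu) / torch.sqrt(var + eps)
```

In training, a term like `xded_loss` would typically be added to the standard cross-entropy loss with a weighting coefficient, so the classifier is fit to the labels while being pulled toward domain-invariant predictions.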


Results from the Paper


Task                  | Dataset     | Model            | Metric           | Value | Global Rank
Domain Generalization | Office-Home | XDED (ResNet-18) | Average Accuracy | 67.4  | #31
Domain Generalization | PACS        | XDED (ResNet-18) | Average Accuracy | 86.4  | #39
