Dynamically Decoding Source Domain Knowledge for Domain Generalization

6 Oct 2021 · Cuicui Kang, Karthik Nandakumar

Optimizing the performance of classifiers on samples from unseen domains remains a challenging problem. While most existing studies on domain generalization focus on learning domain-invariant feature representations, multi-expert frameworks have emerged as a promising alternative with strong empirical results. However, current multi-expert learning frameworks fail to fully exploit source domain knowledge during inference, resulting in sub-optimal performance. In this work, we propose to adapt Transformers to dynamically decode source domain knowledge for domain generalization. Specifically, we build one domain-specific local expert per source domain and one domain-agnostic feature branch that serves as the query. A Transformer encoder encodes all domain-specific features as source domain knowledge in memory. In the Transformer decoder, the domain-agnostic query interacts with this memory through cross-attention, so that source domains similar to the input contribute more to the attention output. Source domain knowledge is thus dynamically decoded to infer the label of an input from an unseen domain, which enables the proposed method to generalize well. The proposed method has been evaluated on three domain generalization benchmarks and outperforms state-of-the-art methods.
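
No reference implementation accompanies this abstract, so the following is a minimal PyTorch sketch of the described encoder-decoder mechanism under stated assumptions: the class name DomainDecoder, the feature dimension, and the linear expert stubs (standing in for per-domain backbone branches) are all hypothetical, not the authors' code. It shows the core idea: per-domain expert features are stacked into the Transformer encoder's memory, and the domain-agnostic query cross-attends to that memory in the decoder.

```python
import torch
import torch.nn as nn

class DomainDecoder(nn.Module):
    """Sketch of the dynamic source-domain decoding mechanism.

    One local expert per source domain plus one domain-agnostic branch.
    Expert outputs form the Transformer encoder's memory; the decoder's
    cross-attention lets the domain-agnostic query draw most heavily on
    the source domains that resemble the current input.
    """

    def __init__(self, num_domains, feat_dim=512, num_classes=7, nhead=8):
        super().__init__()
        # Hypothetical linear stubs; the paper uses per-domain feature branches.
        self.experts = nn.ModuleList(
            nn.Linear(feat_dim, feat_dim) for _ in range(num_domains)
        )
        # Domain-agnostic branch producing the decoder query.
        self.agnostic = nn.Linear(feat_dim, feat_dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=feat_dim, nhead=nhead),
            num_layers=1,
        )
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model=feat_dim, nhead=nhead),
            num_layers=1,
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        # x: (batch, feat_dim) features from a shared backbone (assumption).
        # Stack expert features into memory tokens: (num_domains, batch, feat_dim).
        memory = torch.stack([expert(x) for expert in self.experts], dim=0)
        memory = self.encoder(memory)  # encode source-domain knowledge
        query = self.agnostic(x).unsqueeze(0)  # (1, batch, feat_dim)
        # Cross-attention: similar source domains contribute more to the output.
        decoded = self.decoder(query, memory).squeeze(0)
        return self.classifier(decoded)

# Usage: e.g., three source domains, a batch of four unseen-domain inputs.
model = DomainDecoder(num_domains=3)
logits = model(torch.randn(4, 512))
```

Keeping the query domain-agnostic while the memory stays domain-specific is what makes the decoding dynamic: the cross-attention weights are recomputed per input, so no fixed mixture over source experts is assumed at test time.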
