Exploiting Style Transfer-based Task Augmentation for Cross-Domain Few-Shot Learning

19 Jan 2023  ·  Shuzhen Rao, Jun Huang, Zengming Tang ·

In cross-domain few-shot learning, the core issue is that a model trained on source domains struggles to generalize to the target domain, especially when the domain shift is large. Motivated by the observation that the domain shift between training tasks and target tasks is usually reflected in their style variation, we propose Task Augmented Meta-Learning (TAML), which performs style transfer-based task augmentation to improve domain generalization. First, Multi-task Interpolation (MTI) is introduced to fuse features from multiple tasks with different styles, making more diverse styles available. Second, a novel task-augmentation strategy called Multi-Task Style Transfer (MTST) performs style transfer on existing tasks so that the model learns discriminative, style-independent features. We also introduce a Feature Modulation module (FM) that injects random styles to further improve generalization. TAML thus increases the style diversity of training tasks and yields a model with stronger domain generalization ability. Its effectiveness is demonstrated through theoretical analysis and thorough experiments on two popular cross-domain few-shot benchmarks.
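To make the three operations concrete, below is a minimal PyTorch sketch of feature-level style manipulation, assuming the common AdaIN-style treatment of channel-wise mean and standard deviation as "style". The function names and exact formulations here are illustrative stand-ins, not the authors' implementation.

```python
# Hedged sketch of style transfer-based task augmentation, assuming
# "style" is captured by channel-wise feature statistics (AdaIN-style).
# All names are hypothetical; the paper's formulation may differ.
import torch


def style_stats(feat: torch.Tensor, eps: float = 1e-6):
    """Channel-wise mean and std of a feature map of shape (B, C, H, W)."""
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mu, sigma


def multi_task_style_transfer(content: torch.Tensor, style: torch.Tensor):
    """MTST-like step: re-normalize one task's features with another
    task's style statistics (as in AdaIN), keeping spatial content."""
    c_mu, c_sigma = style_stats(content)
    s_mu, s_sigma = style_stats(style)
    return s_sigma * (content - c_mu) / c_sigma + s_mu


def multi_task_interpolation(feat_a: torch.Tensor, feat_b: torch.Tensor,
                             alpha: float = 0.5):
    """MTI-like step: fuse two tasks by convex interpolation of their
    style statistics, producing a new in-between style."""
    a_mu, a_sigma = style_stats(feat_a)
    b_mu, b_sigma = style_stats(feat_b)
    mix_mu = alpha * a_mu + (1 - alpha) * b_mu
    mix_sigma = alpha * a_sigma + (1 - alpha) * b_sigma
    return mix_sigma * (feat_a - a_mu) / a_sigma + mix_mu


def feature_modulation(feat: torch.Tensor, noise_scale: float = 0.1):
    """FM-like step: inject random styles by perturbing the channel
    statistics with Gaussian noise."""
    mu, sigma = style_stats(feat)
    new_mu = mu + noise_scale * torch.randn_like(mu)
    new_sigma = sigma * (1 + noise_scale * torch.randn_like(sigma))
    return new_sigma * (feat - mu) / sigma + new_mu


if __name__ == "__main__":
    task_a = torch.randn(4, 64, 8, 8)  # features from one training task
    task_b = torch.randn(4, 64, 8, 8)  # features from a second task
    transferred = multi_task_style_transfer(task_a, task_b)
    fused = multi_task_interpolation(task_a, task_b, alpha=0.3)
    modulated = feature_modulation(task_a)
    print(transferred.shape, fused.shape, modulated.shape)
```

All three operations keep the spatial layout of the features and only alter their first- and second-order channel statistics, which is what makes them cheap task-level augmentations during meta-training.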


