no code implementations • 18 Aug 2023 • Shuhui Wu, Zengming Tang, Zongyi Guo, Weiwei Zhang, Baoliang Cui, Haihong Tang, Weiming Lu
Simultaneously, we use open-domain datasets during training to improve both the performance and the generalization ability of PUMGPT.
no code implementations • 19 Jan 2023 • Shuzhen Rao, Jun Huang, Zengming Tang
Motivated by the observation that the domain shift between training tasks and target tasks is usually reflected in their style variation, we propose Task Augmented Meta-Learning (TAML), which performs style transfer-based task augmentation to improve domain generalization.