Meta Learning with Minimax Regularization

29 Sep 2021 · Lianzhe Wang, Shiji Zhou, Shanghang Zhang, Wenpeng Zhang, Heng Chang, Wenwu Zhu

Even though meta-learning has attracted wide research attention in recent years, the generalization problem of meta-learning is still not well addressed. Existing works focus on meta-generalization to unseen tasks at the meta-level, while ignoring that adapted models may fail to generalize within the task domain at the adaptation level, a problem that cannot be solved trivially. To this end, we propose a new regularization mechanism for meta-learning -- Minimax-Meta Regularization. Specifically, we maximize the regularizer in the inner loop to encourage the adapted model to be more sensitive to the new task, and minimize the regularizer in the outer loop to resist overfitting of the meta-model. This adversarial regularization forces the meta-algorithm to maintain generality at the meta-level while making it easy to learn task-specific assumptions at the adaptation level, thereby improving the generalization of meta-learning. We conduct extensive experiments on representative meta-learning scenarios, including few-shot learning and robust reweighting, to verify the proposed method. The results show that our method consistently improves the performance of meta-learning algorithms and demonstrates the effectiveness of Minimax-Meta Regularization.
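To make the inner-maximize / outer-minimize mechanism concrete, the following is a minimal first-order sketch (FOMAML-style, not the paper's exact algorithm) on toy linear-regression tasks. It assumes an L2 regularizer and squared-error losses; the function names, hyperparameters (`alpha`, `beta`, `lam`), and task format are illustrative, not taken from the paper.

```python
import numpy as np

def task_loss_grad(w, X, y):
    # Squared-error loss L(w) = mean((Xw - y)^2) and its gradient.
    r = X @ w - y
    return (r @ r) / len(y), 2.0 * X.T @ r / len(y)

def minimax_meta_step(w_meta, tasks, alpha=0.1, beta=0.05, lam=0.01):
    """One first-order meta-update with Minimax-Meta Regularization (sketch).

    Each task is (X_support, y_support, X_query, y_query).
    Inner loop: descend on  L_support(w) - lam * ||w||^2
                (i.e. MAXIMIZE the regularizer, encouraging task sensitivity).
    Outer loop: descend on  L_query(w_adapted) + lam * ||w_meta||^2
                (i.e. MINIMIZE the regularizer on the meta-model).
    """
    meta_grad = np.zeros_like(w_meta)
    for Xs, ys, Xq, yq in tasks:
        # Inner adaptation step: the regularizer's gradient enters with a
        # minus sign because it is being maximized.
        _, g_support = task_loss_grad(w_meta, Xs, ys)
        w_adapt = w_meta - alpha * (g_support - lam * 2.0 * w_meta)
        # Outer (meta) gradient: query loss of the adapted model plus the
        # minimized regularizer on the meta-model (first-order approximation).
        _, g_query = task_loss_grad(w_adapt, Xq, yq)
        meta_grad += g_query + lam * 2.0 * w_meta
    return w_meta - beta * meta_grad / len(tasks)
```

Note that the same regularizer appears with opposite signs in the two loops, which is the adversarial part: the inner step is pushed away from the regularized solution while the outer step is pulled back toward it.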
