Connection-Adaptive Meta-Learning

1 Jan 2021 · Yadong Ding, Yu Wu, Chengyue Huang, Siliang Tang, Yi Yang, Yueting Zhuang

Meta-learning enables models to adapt rapidly to new environments with only a few training examples. Current gradient-based meta-learning methods concentrate on finding good initializations (meta-weights) for learners, but ignore the impact of neural architectures. In this paper, we aim to obtain better meta-learners by co-optimizing the architecture and the meta-weights simultaneously. Existing NAS-based methods apply a two-stage strategy, i.e., first searching architectures and then re-training meta-weights for the searched architecture. However, this two-stage strategy leads to a suboptimal meta-learner, since the meta-weights are overlooked while searching architectures for meta-learning. In contrast, we propose a more efficient and effective method for meta-learning, namely Connection-Adaptive Meta-learning (CAML), which jointly searches architectures and trains the meta-weights on consolidated connections. During the search, we consolidate the architecture connections layer by layer, fixing first the layer with the largest weight value. With a single search, CAML obtains both an adaptive architecture and meta-weights for meta-learning. Extensive experiments show that CAML achieves state-of-the-art performance with 130x less computational cost, demonstrating our method's effectiveness and efficiency.
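The abstract does not give implementation details, but the layer-by-layer consolidation it describes can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: it assumes DARTS-style per-layer architecture weights (one score per candidate connection) and a caller-supplied `search_step` routine that jointly updates meta-weights and architecture weights; both names are assumptions introduced here.

```python
import numpy as np

# Hypothetical sketch of layer-by-layer connection consolidation as described
# in the abstract (function and variable names are assumptions, not the
# authors' implementation). Each layer keeps a vector of architecture weights,
# one per candidate connection, updated jointly with the meta-weights. After
# each search round, the not-yet-fixed layer whose largest architecture weight
# is highest is consolidated: its best connection is fixed and removed from
# further architecture search.

def consolidate_layers(arch_weights, search_step):
    """arch_weights: list of 1-D arrays, one per layer (candidate-connection scores).
    search_step: callable that jointly updates the meta-weights and the
                 architecture weights of the still-searchable layers
                 (e.g., a MAML-style inner/outer loop; details omitted).
    Returns a dict mapping each layer index to its chosen connection index."""
    num_layers = len(arch_weights)
    fixed = {}  # layer index -> consolidated connection index
    while len(fixed) < num_layers:
        searchable = [i for i in range(num_layers) if i not in fixed]
        # Joint update of meta-weights and architecture weights.
        arch_weights = search_step(arch_weights, searchable)
        # Consolidate the searchable layer with the largest weight value first.
        layer = max(searchable, key=lambda i: arch_weights[i].max())
        fixed[layer] = int(np.argmax(arch_weights[layer]))
    return fixed
```

Because every layer is fixed during the same search run, no separate re-training stage for the searched architecture is needed, which is where the claimed efficiency over two-stage NAS-based meta-learning comes from.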
