Generalized Inner Loop Meta-Learning

ICLR 2020 · Edward Grefenstette, Brandon Amos, Denis Yarats, Phu Mon Htut, Artem Molchanov, Franziska Meier, Douwe Kiela, Kyunghyun Cho, Soumith Chintala

Many (but not all) approaches self-qualifying as "meta-learning" in deep learning and reinforcement learning fit a common pattern of approximating the solution to a nested optimization problem. In this paper, we give a formalization of this shared pattern, which we call GIMLI, prove its general requirements, and derive a general-purpose algorithm for implementing similar approaches.
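To make the nested-optimization pattern concrete, here is a minimal sketch (not the paper's implementation) of the structure GIMLI formalizes: an inner loop adapts parameters by gradient descent on a task loss, and an outer loop differentiates through the unrolled inner loop to update a meta-parameter (here, the shared initialization). The 1-D quadratic tasks and all function names are illustrative assumptions, chosen so the inner-loop derivative can be tracked by hand.

```python
# Sketch of inner-loop meta-learning as a nested optimization problem.
# Inner task loss: (w - c_train)^2; outer (meta) loss: (w_K - c_val)^2,
# where w_K is the result of K inner gradient steps from init theta.
# We carry dw/dtheta through the unrolled inner loop (the chain rule
# that autodiff tools compute automatically in real implementations).

def inner_loss_grad(w, c):
    # d/dw of (w - c)^2
    return 2.0 * (w - c)

def unrolled_inner_loop(theta, c_train, lr=0.1, steps=5):
    """Run `steps` of inner gradient descent from init theta,
    tracking the derivative dw/dtheta alongside w."""
    w, dw_dtheta = theta, 1.0
    for _ in range(steps):
        g = inner_loss_grad(w, c_train)
        # w' = w - lr * 2(w - c)  =>  dw'/dtheta = (1 - 2*lr) * dw/dtheta
        w = w - lr * g
        dw_dtheta = (1.0 - 2.0 * lr) * dw_dtheta
    return w, dw_dtheta

def meta_step(theta, tasks, meta_lr=0.5):
    """One outer-loop update: average the meta-gradient over tasks."""
    meta_grad = 0.0
    for c_train, c_val in tasks:
        w, dw_dtheta = unrolled_inner_loop(theta, c_train)
        # Chain rule: d outer_loss / d theta = 2 * (w - c_val) * dw/dtheta
        meta_grad += 2.0 * (w - c_val) * dw_dtheta
    return theta - meta_lr * meta_grad / len(tasks)

# Hypothetical tasks: (inner/train target, outer/validation target) pairs.
tasks = [(1.0, 1.2), (3.0, 2.8)]
theta = 0.0
for _ in range(200):
    theta = meta_step(theta, tasks)
```

After the outer loop converges, `theta` settles at the initialization that minimizes the post-adaptation validation loss averaged over tasks, rather than the raw training loss of either task; this separation of an inner adaptation loss from an outer meta-loss is the shared pattern the paper formalizes.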

