Meta-Learning with Warped Gradient Descent

ICLR 2020 · Sebastian Flennerhag · Andrei A. Rusu · Razvan Pascanu · Francesco Visin · Hujun Yin · Raia Hadsell

Learning an efficient update rule from data that promotes rapid learning of new tasks from the same distribution remains an open problem in meta-learning. Typically, previous works have approached this issue either by attempting to train a neural network that directly produces updates or by attempting to learn better initialisations or scaling factors for a gradient-based update rule...
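
As a rough illustration of the gradient-based family the abstract alludes to, here is a minimal NumPy sketch of a warped (preconditioned) gradient step: the warp parameters `phi` would be meta-learned across tasks while the task parameters `theta` adapt. All names (`warp_matrix`, `inner_update`, the toy quadratic loss, `dim`) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 4  # toy parameter dimension (assumption, for illustration only)

def warp_matrix(phi):
    """Learned preconditioner P(phi) = A A^T + eps*I, kept positive
    definite so the warped step remains a descent direction."""
    A = phi.reshape(dim, dim)
    return A @ A.T + 1e-3 * np.eye(dim)

def task_loss_grad(theta, target):
    """Gradient of a toy quadratic task loss 0.5 * ||theta - target||^2."""
    return theta - target

def inner_update(theta, phi, target, lr=0.1):
    """One warped-gradient step on the task loss. phi is held fixed
    during task adaptation and would be meta-learned across tasks."""
    return theta - lr * warp_matrix(phi) @ task_loss_grad(theta, target)

# Usage: adapt to one sampled task with (here random) warp parameters.
phi = rng.normal(size=dim * dim)
theta = np.zeros(dim)
target = rng.normal(size=dim)
for _ in range(10):
    theta = inner_update(theta, phi, target)
```

The sketch only shows the inner (task-adaptation) loop; in a full meta-learning setup an outer loop would update `phi` based on post-adaptation performance across many sampled tasks.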
