Online Learning Rate Adaptation with Hypergradient Descent

ICLR 2018. Atilim Gunes Baydin, Robert Cornish, David Martinez Rubio, Mark Schmidt, Frank Wood

We introduce a general method for improving the convergence rate of gradient-based optimizers that is easy to implement and works well in practice. We demonstrate the effectiveness of the method in a range of optimization problems by applying it to stochastic gradient descent, stochastic gradient descent with Nesterov momentum, and Adam, showing that it significantly reduces the need for the manual tuning of the initial learning rate for these commonly used algorithms. Our method works by dynamically updating the learning rate during optimization using the gradient with respect to the learning rate of the update rule itself. Computing this "hypergradient" needs little additional computation, requires only one extra copy of the original gradient to be stored in memory, and relies upon nothing more than what is provided by reverse-mode automatic differentiation.
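To make the idea concrete, here is a minimal sketch of how a hypergradient-adapted variant of plain SGD could look, following the description above. The function name sgd_hd and its parameters (alpha0 for the initial learning rate, beta for the hypergradient step size, grad_fn as a gradient oracle) are illustrative assumptions, not the paper's reference code.

```python
import numpy as np

def sgd_hd(grad_fn, theta, alpha0=0.001, beta=1e-4, num_steps=100):
    """Plain SGD with an online, hypergradient-adapted learning rate (sketch).

    grad_fn: callable returning the gradient of the objective at theta.
    alpha0:  initial learning rate, the quantity adapted during optimization.
    beta:    step size for the gradient step on the learning rate itself.
    """
    alpha = alpha0
    prev_grad = np.zeros_like(theta)  # gradient from the previous iteration
    for _ in range(num_steps):
        grad = grad_fn(theta)
        # The gradient of the objective with respect to alpha is
        # -grad . prev_grad, so descending on alpha means adding
        # beta * (grad . prev_grad) to the current learning rate.
        alpha = alpha + beta * np.dot(grad, prev_grad)
        theta = theta - alpha * grad
        prev_grad = grad
    return theta, alpha

# Example: minimize f(theta) = 0.5 * ||theta||^2, whose gradient is theta.
theta_star, alpha_final = sgd_hd(lambda th: th, np.array([1.0, -2.0, 3.0]))
```

Note that the only extra state kept between iterations is prev_grad, a single copy of the previous gradient, which is consistent with the memory cost described in the abstract.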


