Continuous-time Lower Bounds for Gradient-based Algorithms

ICML 2020  ·  Michael Muehlebach, Michael I. Jordan

This article derives lower bounds on the convergence rate of continuous-time gradient-based optimization algorithms. The algorithms are subject to a time-normalization constraint that rules out reparametrizations of time, which is necessary for continuous-time convergence rates to be meaningful. We reduce the multi-dimensional problem to a single dimension, recover well-known lower bounds from the discrete-time setting, and provide insight into why these lower bounds arise. We present algorithms that achieve the proposed lower bounds, even when the function class under consideration includes certain nonconvex functions.
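To make the setting concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of a continuous-time gradient flow, ẋ = −∇f(x), integrated numerically on the quadratic f(x) = ½λx². The closed-form solution x(t) = x₀e^{−λt} exhibits the exponential convergence rate that such lower bounds constrain; the step size `dt` and test values are arbitrary choices for the demonstration.

```python
import math

def gradient_flow_quadratic(x0, lam, t_end, dt=1e-4):
    """Explicit-Euler integration of the gradient flow x' = -f'(x)
    for f(x) = 0.5 * lam * x**2, i.e. x' = -lam * x."""
    x = x0
    steps = int(t_end / dt)
    for _ in range(steps):
        x -= dt * lam * x  # one Euler step along the negative gradient
    return x

# The exact solution of this linear ODE is x(t) = x0 * exp(-lam * t),
# so the trajectory contracts at the exponential rate lam.
x0, lam, t_end = 1.0, 2.0, 1.0
numeric = gradient_flow_quadratic(x0, lam, t_end)
exact = x0 * math.exp(-lam * t_end)
```

Without a time-normalization constraint, rescaling time t → ct would make the rate appear arbitrarily fast, which is why such a constraint is needed before continuous-time rates can be compared.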


Categories


Optimization and Control  ·  Systems and Control