# Second-Order Information in Non-Convex Stochastic Optimization: Power and Limitations

24 Jun 2020 · Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Ayush Sekhari, Karthik Sridharan

We design an algorithm which finds an $\epsilon$-approximate stationary point (with $\|\nabla F(x)\|\le \epsilon$) using $O(\epsilon^{-3})$ stochastic gradient and Hessian-vector products, matching guarantees that were previously available only under a stronger assumption of access to multiple queries with the same random seed. We prove a lower bound which establishes that this rate is optimal and---surprisingly---that it cannot be improved using stochastic $p$th order methods for any $p\ge 2$, even when the first $p$ derivatives of the objective are Lipschitz...
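The abstract's two oracle calls are stochastic gradients and Hessian-vector products. The toy sketch below (a hypothetical illustration, not the paper's algorithm) shows both oracles on a simple objective $F(x) = \tfrac12 x^\top A x + \tfrac14 \|x\|^4$, computing the Hessian-vector product by a finite difference of the gradient so the Hessian is never formed, and then testing the $\epsilon$-stationarity condition $\|\nabla F(x)\|\le\epsilon$:

```python
import numpy as np

# Toy objective (not from the paper): F(x) = 0.5 x^T A x + 0.25 ||x||^4.
A = np.diag([1.0, 2.0, 3.0])

def grad_F(x):
    # Exact gradient of F; a stochastic first-order oracle would return
    # this plus zero-mean noise.
    return A @ x + (x @ x) * x

def hvp_F(x, v, h=1e-6):
    # Hessian-vector product via a central finite difference of the gradient:
    # H(x) v ~= (grad F(x + h v) - grad F(x - h v)) / (2 h),
    # so the full Hessian is never materialized.
    return (grad_F(x + h * v) - grad_F(x - h * v)) / (2 * h)

x = np.array([0.1, -0.2, 0.3])
v = np.array([1.0, 0.0, -1.0])

# Sanity check against the exact Hessian A + ||x||^2 I + 2 x x^T.
H = A + (x @ x) * np.eye(3) + 2.0 * np.outer(x, x)
assert np.allclose(hvp_F(x, v), H @ v, atol=1e-4)

# Epsilon-approximate stationarity check from the abstract.
def is_stationary(x, eps):
    return np.linalg.norm(grad_F(x)) <= eps
```

In the paper's setting these oracle calls would be stochastic, and the result is that $O(\epsilon^{-3})$ such calls suffice to make `is_stationary(x, eps)` hold.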

