no code implementations • 16 Jan 2019 • Aram Harrow, John Napp
We define a simple class of problems for which a variational algorithm based on low-depth gradient measurements and stochastic gradient descent converges to the optimum substantially faster than any possible strategy based on estimating the objective function itself. We also show that stochastic gradient descent is essentially optimal for this problem class.
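The core idea, stochastic gradient descent driven by cheap, noisy, unbiased gradient estimates, can be sketched as follows. This is a minimal illustration, not the paper's actual problem class or algorithm: the objective is a toy quadratic, and the measurement noise is modeled as additive Gaussian noise whose scale shrinks with the (hypothetical) number of shots.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(theta):
    # Toy stand-in for the variational objective (NOT the paper's problem
    # class): a quadratic with its minimum at theta = 1.
    return float(np.sum((theta - 1.0) ** 2))

def noisy_gradient(theta, shots=1):
    # Unbiased gradient estimate plus noise, mimicking the statistical
    # error of estimating a gradient from only a few measurements.
    exact = 2.0 * (theta - 1.0)
    return exact + rng.normal(scale=1.0 / np.sqrt(shots), size=theta.shape)

def sgd(theta0, steps=2000, lr0=0.5):
    # Plain stochastic gradient descent with a 1/t step-size schedule,
    # which averages out the per-step gradient noise over time.
    theta = theta0.copy()
    for t in range(1, steps + 1):
        theta -= (lr0 / t) * noisy_gradient(theta)
    return theta

theta = sgd(np.zeros(4))
print(objective(theta))  # residual error is small despite noisy gradients
```

Even with single-shot (high-variance) gradient estimates, the iterates converge close to the optimum, which is the qualitative behavior the abstract contrasts with strategies that must first estimate the objective value accurately.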
Quantum Physics • Optimization and Control