# Finite Sample Convergence Rates of Zero-Order Stochastic Optimization Methods

We consider derivative-free algorithms for stochastic optimization problems that use only noisy function values rather than gradients, analyzing their finite-sample convergence rates. We show that if pairs of function values are available, algorithms that use gradient estimates based on random perturbations suffer a factor of at most $\sqrt{\dim}$ in convergence rate over traditional stochastic gradient methods, where $\dim$ is the dimension of the problem…
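To make the two-point idea concrete, here is a minimal sketch (not the paper's exact algorithm) of a zero-order method: the gradient is estimated from a pair of function values at symmetric random perturbations, $g = \dim \cdot \frac{f(x+\delta u) - f(x-\delta u)}{2\delta}\, u$ with $u$ uniform on the unit sphere, and plugged into stochastic gradient descent. All function names, step sizes, and the toy quadratic objective below are illustrative assumptions.

```python
import math
import random

def two_point_grad(f, x, delta, rng):
    """Estimate grad f(x) from a single pair of function values.

    u is drawn uniformly from the unit sphere; for smooth f the
    estimator dim * (f(x + delta*u) - f(x - delta*u)) / (2*delta) * u
    is a nearly unbiased gradient estimate (exact in expectation for
    a smoothed version of f).
    """
    d = len(x)
    g = [rng.gauss(0.0, 1.0) for _ in range(d)]       # Gaussian direction
    norm = math.sqrt(sum(gi * gi for gi in g))
    u = [gi / norm for gi in g]                        # normalize to sphere
    fp = f([xi + delta * ui for xi, ui in zip(x, u)])  # first function value
    fm = f([xi - delta * ui for xi, ui in zip(x, u)])  # second function value
    scale = d * (fp - fm) / (2.0 * delta)
    return [scale * ui for ui in u]

def zero_order_sgd(f, x0, steps=2000, step0=0.2, delta=1e-4, seed=0):
    """SGD driven only by paired function evaluations (no gradients)."""
    rng = random.Random(seed)
    x = list(x0)
    for t in range(1, steps + 1):
        g = two_point_grad(f, x, delta, rng)
        eta = step0 / math.sqrt(t)                     # O(1/sqrt(t)) step sizes
        x = [xi - eta * gi for xi, gi in zip(x, g)]
    return x

# Toy strongly convex objective with minimizer at (1, -2, 3).
target = [1.0, -2.0, 3.0]
f = lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target))
x_hat = zero_order_sgd(f, [0.0, 0.0, 0.0])
```

The $\dim$ factor in the scale term is what makes the estimate unbiased, and it is also the source of the extra variance that drives the $\sqrt{\dim}$ degradation in the convergence rate relative to true stochastic gradients.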

