A Stochastic Gradient Method with an Exponential Convergence Rate for Finite Training Sets

We propose a new stochastic gradient method for optimizing the sum of a finite set of smooth functions, where the sum is strongly convex. While standard stochastic gradient methods converge at sublinear rates for this problem, the proposed method incorporates a memory of previous gradient values in order to achieve a linear convergence rate. In a machine learning context, numerical experiments indicate that the new algorithm can dramatically outperform standard algorithms, both in terms of optimizing the training error and reducing the test error quickly.
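Below is a minimal sketch of the memory-of-previous-gradients idea the abstract describes, not the paper's exact algorithm: one gradient is refreshed per iteration while the stored gradients of the other terms are reused, and the iterate moves along their average. The function name `sag_like_step_loop`, the interface `grad_i(x, i)`, and the constant step size are illustrative assumptions.

```python
import numpy as np

def sag_like_step_loop(grad_i, x0, n, step=0.01, iters=1000, seed=0):
    """Sketch of a stochastic gradient loop that keeps a memory of the most
    recently computed gradient for each of the n terms in the finite sum.

    grad_i(x, i) -- gradient of the i-th smooth term at x (assumed interface)
    x0           -- initial point (1-D array)
    n            -- number of terms in the finite sum
    step         -- constant step size (illustrative choice)
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    memory = np.zeros((n, x.size))   # stored gradient for each term
    avg = np.zeros(x.size)           # average of the stored gradients

    for _ in range(iters):
        i = rng.integers(n)              # pick one term uniformly at random
        g = grad_i(x, i)                 # compute a fresh gradient for it
        avg += (g - memory[i]) / n       # update the average incrementally
        memory[i] = g                    # overwrite the stored gradient
        x -= step * avg                  # step along the averaged direction
    return x


# Illustrative usage on a tiny least-squares problem (hypothetical data).
if __name__ == "__main__":
    A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
    b = np.array([1.0, 2.0, 3.0])
    grad_i = lambda x, i: (A[i] @ x - b[i]) * A[i]
    x_hat = sag_like_step_loop(grad_i, np.zeros(2), n=3, step=0.1, iters=5000)
    print(x_hat)
```

Because only one gradient is recomputed per iteration, the per-iteration cost matches standard stochastic gradient methods, while the averaged memory is what the paper credits for the improved convergence rate.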
