Generalization Properties and Implicit Regularization for Multiple Passes SGM

26 May 2016 · Junhong Lin, Raffaello Camoriano, Lorenzo Rosasco

We study the generalization properties of stochastic gradient methods for learning with convex loss functions and linearly parameterized functions. We show that, in the absence of penalizations or constraints, the stability and approximation properties of the algorithm can be controlled by tuning either the step-size or the number of passes over the data. In this view, these parameters can be seen to control a form of implicit regularization. Numerical results complement the theoretical findings.
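The idea can be illustrated with a minimal sketch (not the paper's exact experimental setup): plain multi-pass stochastic gradient descent on least squares with a linear model and no explicit penalty, where the number of passes plays the role of a regularization parameter. All data dimensions, step-sizes, and pass counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic linear regression data (illustrative sizes).
n, d = 200, 50
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.5 * rng.standard_normal(n)

# Noiseless test set to measure generalization error.
X_test = rng.standard_normal((1000, d))
y_test = X_test @ w_true

def sgm(X, y, step_size=0.01, n_passes=1, seed=1):
    """Stochastic gradient method for the square loss, no penalty.

    One 'pass' visits every training point once in random order;
    step_size and n_passes are the two tuning knobs discussed above.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(n_passes):
        for i in rng.permutation(len(y)):
            # Gradient of 1/2 * (x_i . w - y_i)^2 with respect to w.
            grad = (X[i] @ w - y[i]) * X[i]
            w -= step_size * grad
    return w

# Test error as a function of the number of passes: early on, more
# passes reduce error; stopping early acts as implicit regularization.
for n_passes in (1, 5, 50):
    w = sgm(X, y, step_size=0.01, n_passes=n_passes)
    err = np.mean((X_test @ w - y_test) ** 2)
    print(f"{n_passes:3d} passes: test MSE = {err:.3f}")
```

Running the loop shows the test error shrinking as passes accumulate; in noisier or higher-dimensional regimes, continuing past the optimal number of passes would instead increase test error, which is the trade-off the paper analyzes.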
