On the Relation Between the Sharpest Directions of DNN Loss and the SGD Step Length

Stochastic Gradient Descent (SGD)-based training of neural networks with a large learning rate or a small batch size typically ends in well-generalizing, flat regions of the weight space, as indicated by small eigenvalues of the Hessian of the training loss. However, the curvature along the SGD trajectory is poorly understood...
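Since the abstract measures flatness and sharpness through the eigenvalues of the training-loss Hessian, the sketch below shows one common way such sharpness can be probed: power iteration on Hessian-vector products. This is an illustration of the general technique, not the paper's measurement code; `model`, `loss_fn`, `inputs`, and `targets` are hypothetical placeholders.

```python
# Sketch: estimate the largest Hessian eigenvalue (sharpness) of the training
# loss via power iteration on Hessian-vector products (Pearlmutter's trick).
# Assumes PyTorch; model, loss_fn, and the data batch are placeholders.
import torch


def hessian_vector_product(loss, params, vec):
    """Compute H @ vec using double backprop, without forming the Hessian."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    grad_dot_vec = torch.dot(flat_grad, vec)
    hvp = torch.autograd.grad(grad_dot_vec, params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hvp])


def largest_hessian_eigenvalue(model, loss_fn, inputs, targets, iters=20):
    """Power iteration: repeatedly apply the Hessian to a random unit vector."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(inputs), targets)
    n = sum(p.numel() for p in params)
    v = torch.randn(n, device=inputs.device)
    v /= v.norm()
    eigenvalue = 0.0
    for _ in range(iters):
        hv = hessian_vector_product(loss, params, v)
        eigenvalue = torch.dot(v, hv).item()  # Rayleigh quotient estimate
        v = hv / (hv.norm() + 1e-12)
    return eigenvalue
```

Tracking this estimate over training iterations gives a rough view of how sharpness along the dominant direction evolves along the SGD trajectory.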

ICLR 2019


Methods used in the Paper


METHOD    TYPE
SGD       Stochastic Optimization
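For context: plain SGD with learning rate η updates the weights on a minibatch B as θ ← θ − η ∇L_B(θ), so the "SGD step length" referred to in the title is the norm of that update, η‖∇L_B(θ)‖; it grows with the learning rate and, since smaller batches give noisier gradients, is typically larger for smaller batch sizes.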