Search Results for author: James B Simon

Found 2 papers, 0 papers with code

SGD Can Converge to Local Maxima

no code implementations • ICLR 2022 • Liu Ziyin, Botao Li, James B Simon, Masahito Ueda

Stochastic gradient descent (SGD) is widely used for the nonlinear, nonconvex problem of training deep neural networks, but its behavior remains poorly understood.
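Neither listing links to code, so as a rough point of reference, here is a minimal numpy sketch of the plain SGD update the abstract refers to, run on a hypothetical toy nonconvex loss. The model, data, and hyperparameters are illustrative assumptions, not the paper's setup or its local-maximum construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w, x, y):
    # toy nonconvex per-example loss: squared error of a one-parameter sine model
    return (np.sin(w * x) - y) ** 2

def grad(w, x, y):
    # analytic gradient of the toy loss with respect to the scalar weight w
    return 2.0 * (np.sin(w * x) - y) * np.cos(w * x) * x

# synthetic data generated by a "true" weight of 2.0 (an assumption for the demo)
x_data = rng.uniform(-1.0, 1.0, size=256)
y_data = np.sin(2.0 * x_data)

w = 0.5          # initial weight
lr = 0.1         # learning rate
batch_size = 8   # minibatch size

for step in range(2000):
    idx = rng.integers(0, len(x_data), size=batch_size)   # sample a minibatch
    g = grad(w, x_data[idx], y_data[idx]).mean()           # stochastic gradient estimate
    w -= lr * g                                             # SGD update

print(f"final weight: {w:.3f}")
```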

Neural tangent kernel eigenvalues accurately predict generalization

no code implementations • 29 Sep 2021 • James B Simon, Madeline Dickens, Michael DeWeese

Finding a quantitative theory of neural network generalization has long been a central goal of deep learning research.

Task: Inductive Bias
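For the second paper, a rough sketch of what "neural tangent kernel eigenvalues" means in practice: the empirical NTK Gram matrix of a toy two-layer ReLU network, built from per-example parameter gradients and then eigendecomposed. The architecture, width, and data here are illustrative assumptions; this is not the paper's generalization predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

def ntk_eigenvalues(X, width=512):
    """Eigenvalues of the empirical NTK of a toy two-layer ReLU net (illustrative only)."""
    n, d = X.shape
    W = rng.normal(size=(width, d))            # first-layer weights
    a = rng.normal(size=width)                  # second-layer weights
    scale = 1.0 / np.sqrt(width)                # NTK-style output scaling

    pre = X @ W.T                               # (n, width) pre-activations
    act = np.maximum(pre, 0.0)                  # ReLU activations
    mask = (pre > 0).astype(float)              # ReLU derivative

    # Jacobian of the scalar output w.r.t. all parameters, one row per input
    J_a = scale * act                                        # d f / d a  -> (n, width)
    J_W = scale * (a * mask)[:, :, None] * X[:, None, :]     # d f / d W  -> (n, width, d)
    J = np.concatenate([J_a, J_W.reshape(n, -1)], axis=1)

    K = J @ J.T                                 # empirical NTK Gram matrix
    return np.linalg.eigvalsh(K)[::-1]          # eigenvalues, largest first

X = rng.normal(size=(64, 3))                    # hypothetical inputs
print(ntk_eigenvalues(X)[:5])                   # leading kernel eigenvalues
```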
