3 code implementations • NeurIPS 2019 • Sebastian Goldt, Madhu S. Advani, Andrew M. Saxe, Florent Krzakala, Lenka Zdeborová
Deep neural networks achieve stellar generalisation even when they have enough parameters to easily fit all their training data.
no code implementations • 25 Jan 2019 • Sebastian Goldt, Madhu S. Advani, Andrew M. Saxe, Florent Krzakala, Lenka Zdeborová
Deep neural networks achieve stellar generalisation on a variety of problems, despite often being large enough to easily fit all their training data.
no code implementations • 5 Mar 2018 • Yao Zhang, Andrew M. Saxe, Madhu S. Advani, Alpha A. Lee
We derive a correspondence between parameter inference and free energy minimisation in statistical physics.
no code implementations • 10 Oct 2017 • Madhu S. Advani, Andrew M. Saxe
We study the practically relevant "high-dimensional" regime, where the number of free parameters in the network is on the order of, or even larger than, the number of examples in the dataset.