Search Results for author: Brett W. Larsen

Found 2 papers, 1 paper with code

How many degrees of freedom do we need to train deep networks: a loss landscape perspective

1 code implementation · 13 Jul 2021 · Brett W. Larsen, Stanislav Fort, Nic Becker, Surya Ganguli

A variety of recent works, spanning pruning, lottery tickets, and training within random subspaces, have shown that deep neural networks can be trained using far fewer degrees of freedom than the total number of parameters.
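The "training within random subspaces" idea can be sketched as follows: instead of optimizing all D parameters, fix a random projection matrix P and a starting point w0, and train only a low-dimensional vector z, with the full parameters given by w = w0 + P z. This is a minimal NumPy illustration on a toy linear-regression problem, not the paper's implementation; the dimensions, scaling, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem with D full parameters (illustrative sizes).
D = 100        # full parameter count
k = 10         # subspace dimension: the "degrees of freedom" actually trained
n = 200
X = rng.normal(size=(n, D))
w_true = rng.normal(size=D)
y = X @ w_true

# Random subspace parameterization: w = w0 + P @ z, with P fixed and only z trained.
w0 = np.zeros(D)
P = rng.normal(size=(D, k)) / np.sqrt(D)  # fixed random projection, columns roughly unit norm
z = np.zeros(k)

lr = 0.1
for _ in range(500):
    w = w0 + P @ z
    grad_w = X.T @ (X @ w - y) / n  # gradient in the full parameter space
    z -= lr * (P.T @ grad_w)        # chain rule: gradient with respect to z is P^T grad_w

w = w0 + P @ z
loss = np.mean((X @ w - y) ** 2)
```

With k much smaller than D, the subspace cannot fit the target exactly, but the loss still drops well below its initial value, which is the qualitative phenomenon the paper studies: how large k must be for training in a random subspace to succeed.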

Avoiding Spurious Local Minima in Deep Quadratic Networks

no code implementations · 31 Dec 2019 · Abbas Kazemipour, Brett W. Larsen, Shaul Druckmann

Despite their practical success, a theoretical understanding of the loss landscape of neural networks has proven challenging due to the high-dimensional, non-convex, and highly nonlinear structure of such models.
