Search Results for author: Michael Field

Found 7 papers, 0 papers with code

Annihilation of Spurious Minima in Two-Layer ReLU Networks

no code implementations • 12 Oct 2022 • Yossi Arjevani, Michael Field

We study the optimization problem associated with fitting two-layer ReLU neural networks with respect to the squared loss, where labels are generated by a target network.

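For context, the objective studied here can be written out explicitly; this is a minimal sketch under the assumptions usual in this line of work (Gaussian inputs, ReLU activation, unit output weights), with the symbols $W$, $V$, $k$, $d$ being generic notation rather than the paper's:

$$\mathcal{L}(W) = \frac{1}{2}\,\mathbb{E}_{x \sim \mathcal{N}(0, I_d)}\left[\Big(\sum_{i=1}^{k} \sigma(w_i^\top x) - \sum_{j=1}^{k} \sigma(v_j^\top x)\Big)^{2}\right], \qquad \sigma(z) = \max(z, 0),$$

where the rows $w_i$ of the student weight matrix $W$ are optimized and the rows $v_j$ of the teacher matrix $V$ are fixed. Spurious minima are minima of $\mathcal{L}$ other than the zero-loss solutions $W = V$ up to permutation of the rows.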

Analytic Study of Families of Spurious Minima in Two-Layer ReLU Neural Networks: A Tale of Symmetry II

no code implementations • NeurIPS 2021 • Yossi Arjevani, Michael Field

In particular, we derive analytic estimates for the loss at different minima, and prove that, modulo $O(d^{-1/2})$ terms, the Hessian spectrum concentrates near small positive constants, with the exception of $\Theta(d)$ eigenvalues which grow linearly with $d$.
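
To make the flavor of such spectrum statements concrete, here is a small numerical sketch that approximates the Hessian spectrum of an empirical version of the loss near a critical point. Every concrete choice below (the identity teacher, $d = k = 6$, the sample size, the perturbation) is an illustrative assumption, not the paper's construction:

```python
# Illustrative only: empirical Hessian spectrum of a two-layer ReLU
# student-teacher loss near a (perturbed) zero-loss point.
import torch

d = k = 6            # input dimension and hidden width (assumed equal)
n = 20000            # Monte Carlo samples standing in for the population loss
torch.manual_seed(0)

V = torch.eye(k, d)                    # hypothetical teacher weights
X = torch.randn(n, d)                  # x ~ N(0, I_d)
y = torch.relu(X @ V.t()).sum(dim=1)   # teacher labels, unit output weights

def loss(w_flat):
    W = w_flat.view(k, d)
    pred = torch.relu(X @ W.t()).sum(dim=1)
    return 0.5 * ((pred - y) ** 2).mean()

w0 = V.flatten() + 0.01 * torch.randn(k * d)   # small perturbation of W = V
H = torch.autograd.functional.hessian(loss, w0)
print(torch.linalg.eigvalsh(H))                # inspect the spectrum numerically
```

This is only a finite-sample analogue of the population statement; the paper's estimates concern exact spurious minima in the large-$d$ regime.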

Equivariant bifurcation, quadratic equivariants, and symmetry breaking for the standard representation of $S_n$

no code implementations • 6 Jul 2021 • Yossi Arjevani, Michael Field

Motivated by questions originating in the study of a class of shallow student-teacher neural networks, we develop methods for the analysis of spurious minima in classes of gradient equivariant dynamics related to neural nets.
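
For orientation, the representation in the title is standard material: $S_n$ acts by permuting coordinates on the hyperplane $V = \{x \in \mathbb{R}^n : \sum_i x_i = 0\}$, and this representation admits a nonzero quadratic equivariant, for example

$$Q(x)_i = x_i^2 - \frac{1}{n}\sum_{j=1}^{n} x_j^2, \qquad x \in V,$$

which satisfies $Q(\pi x) = \pi Q(x)$ for every permutation $\pi$ and maps $V$ to itself. The existence of such a quadratic term is one reason this bifurcation setting differs from cases where the leading equivariants are cubic.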

Symmetry Breaking in Symmetric Tensor Decomposition

no code implementations • 10 Mar 2021 • Yossi Arjevani, Joan Bruna, Michael Field, Joe Kileel, Matthew Trager, Francis Williams

In this note, we consider the highly nonconvex optimization problem associated with computing the rank decomposition of symmetric tensors.

Tensor Decomposition
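
For reference, the rank decomposition of a symmetric order-$p$ tensor $T$ (the order $p$ and the notation here are generic, not taken from the paper) writes

$$T = \sum_{i=1}^{r} \lambda_i\, u_i^{\otimes p}, \qquad u_i \in \mathbb{R}^d,\ \lambda_i \in \mathbb{R},$$

with the smallest such $r$ the symmetric rank; the associated nonconvex problem minimizes $\big\| T - \sum_{i=1}^{r} \lambda_i\, u_i^{\otimes p} \big\|^2$ over the $u_i$ and $\lambda_i$.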

Analytic Characterization of the Hessian in Shallow ReLU Models: A Tale of Symmetry

no code implementations • NeurIPS 2020 • Yossi Arjevani, Michael Field

We consider the optimization problem associated with fitting two-layer ReLU networks with respect to the squared loss, where labels are generated by a target network.

Symmetry & critical points for a model shallow neural network

no code implementations • 23 Mar 2020 • Yossi Arjevani, Michael Field

We consider the optimization problem associated with fitting two-layer ReLU networks with $k$ hidden neurons, where labels are assumed to be generated by a (teacher) neural network.
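
As a concrete rendering of this setup, the following minimal sketch fits a width-$k$ ReLU student to labels produced by a fixed teacher of the same width; all concrete choices ($d$, $k$, sample size, learning rate, initialization) are illustrative assumptions:

```python
# Illustrative student-teacher fit; gradient descent on the squared loss
# can converge to a spurious (non-global) minimum in this setting.
import torch

d, k, n = 8, 8, 4096
torch.manual_seed(1)

teacher = torch.randn(k, d)                   # fixed teacher weights (assumed)
X = torch.randn(n, d)                         # Gaussian inputs
y = torch.relu(X @ teacher.t()).sum(dim=1)    # labels from the teacher

W = torch.randn(k, d, requires_grad=True)     # student: k hidden ReLU neurons
opt = torch.optim.SGD([W], lr=0.01)

for _ in range(5000):
    opt.zero_grad()
    loss = 0.5 * ((torch.relu(X @ W.t()).sum(dim=1) - y) ** 2).mean()
    loss.backward()
    opt.step()

print(loss.item())  # zero at a global minimum; positive if training stalls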

On the Principle of Least Symmetry Breaking in Shallow ReLU Models

no code implementations • 26 Dec 2019 • Yossi Arjevani, Michael Field

We consider the optimization problem associated with fitting two-layer ReLU networks with respect to the squared loss, where labels are assumed to be generated by a target network.
