# Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK

9 Jul 2020 · Yuanzhi Li, Tengyu Ma, Hongyang R. Zhang

We consider the dynamics of gradient descent for learning a two-layer neural network. We assume the input $x\in\mathbb{R}^d$ is drawn from a Gaussian distribution and that the label of $x$ satisfies $f^{\star}(x) = a^{\top}|W^{\star}x|$, where $a\in\mathbb{R}^d$ is a nonnegative vector, $W^{\star} \in\mathbb{R}^{d\times d}$ is an orthonormal matrix, and $|\cdot|$ is applied entrywise...
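The data model in the abstract can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: the variable names and the choice of QR decomposition to produce an orthonormal $W^{\star}$ are assumptions.

```python
import numpy as np

d = 8
rng = np.random.default_rng(0)

# Orthonormal W*: take the Q factor of a random Gaussian matrix (illustrative choice).
W_star, _ = np.linalg.qr(rng.standard_normal((d, d)))

# Nonnegative second-layer weights a (illustrative construction).
a = np.abs(rng.standard_normal(d))

def f_star(x):
    # Label function from the abstract: f*(x) = a^T |W* x|, |.| entrywise.
    return a @ np.abs(W_star @ x)

x = rng.standard_normal(d)  # Gaussian input, as assumed in the paper
y = f_star(x)
```

Because $a$ is nonnegative and the absolute value is entrywise, every label $y$ produced by this model is nonnegative.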
