14 Sep 2023 • Mike Nguyen, Nicole Mücke
We analyze the generalization properties of two-layer neural networks in the neural tangent kernel (NTK) regime, trained with gradient descent (GD).
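The setting above can be illustrated with a minimal NumPy sketch: a wide two-layer ReLU network with the 1/√width output scaling typical of NTK analyses, trained by full-batch gradient descent on a toy 1-D regression task. All dimensions, the learning rate, and the target function are hypothetical choices for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (hypothetical example, not from the paper).
n, width, lr, steps = 50, 2048, 0.1, 500
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(np.pi * X[:, 0])

# Two-layer network with NTK-style 1/sqrt(width) output scaling.
W = rng.normal(size=(1, width))   # input-to-hidden weights
a = rng.normal(size=width)        # hidden-to-output weights

def forward(X, W, a):
    return np.maximum(X @ W, 0.0) @ a / np.sqrt(width)  # ReLU features

mse0 = np.mean((forward(X, W, a) - y) ** 2)  # error at initialization

for _ in range(steps):
    h = np.maximum(X @ W, 0.0)               # hidden activations, (n, width)
    r = h @ a / np.sqrt(width) - y           # residuals
    grad_a = h.T @ r / (n * np.sqrt(width))
    grad_W = X.T @ ((r[:, None] * (h > 0)) * a) / (n * np.sqrt(width))
    a -= lr * grad_a
    W -= lr * grad_W

mse = np.mean((forward(X, W, a) - y) ** 2)   # training error after GD
```

In this regime the network's training dynamics stay close to kernel regression with the NTK, which is what makes the generalization analysis tractable.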
29 Aug 2023 • Mike Nguyen, Nicole Mücke
Random feature approximation is arguably one of the most popular techniques for speeding up kernel methods in large-scale algorithms, and it also provides a theoretical route to the analysis of deep neural networks.
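As a concrete instance of the speed-up idea, here is a minimal sketch of random Fourier features (Rahimi–Rechts construction) approximating the Gaussian RBF kernel with a low-rank feature map; the dimensions and feature count are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Fourier features approximating the RBF kernel
# k(x, z) = exp(-||x - z||^2 / 2); d, D, n are illustrative.
d, D, n = 5, 4096, 100
X = rng.normal(size=(n, d))

W = rng.normal(size=(d, D))              # random frequencies ~ N(0, I)
b = rng.uniform(0.0, 2 * np.pi, size=D)  # random phases

def features(X):
    # Feature map z(x) with E[z(x) @ z(z)] = k(x, z) by Bochner's theorem.
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

Z = features(X)
K_approx = Z @ Z.T                       # rank-D kernel estimate, O(n*D) memory
K_exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))

max_err = np.abs(K_approx - K_exact).max()
```

Replacing the exact n×n kernel matrix with the n×D feature matrix is what turns O(n²) (or worse) kernel computations into linear-in-n ones.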
20 Oct 2022 • Mike Nguyen, Charly Kirst, Nicole Mücke
We consider distributed learning using constant stepsize SGD (DSGD) over several devices, each sending a final model update to a central server.
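The one-shot averaging scheme described above can be sketched as follows: each device runs constant-stepsize SGD on its local shard for a least-squares problem, and the server averages the final local iterates. The problem instance, stepsize, and shard sizes are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# One-shot distributed SGD (DSGD) for least squares: M devices each run
# constant-stepsize SGD locally, then the server averages final models.
d, M, n_local, lr, epochs = 10, 8, 200, 0.05, 5
w_true = rng.normal(size=d)  # ground-truth parameter (for this toy setup)

def local_sgd(X, y, lr, epochs):
    """Constant-stepsize single-sample SGD on one device's shard."""
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            grad = (X[i] @ w - y[i]) * X[i]  # per-sample squared-loss gradient
            w -= lr * grad
    return w

local_models = []
for _ in range(M):
    X = rng.normal(size=(n_local, d))
    y = X @ w_true + 0.1 * rng.normal(size=n_local)  # noisy local labels
    local_models.append(local_sgd(X, y, lr, epochs))

w_avg = np.mean(local_models, axis=0)  # server-side one-shot averaging
err = np.linalg.norm(w_avg - w_true)
```

Averaging the final iterates reduces the variance contributed by the constant stepsize and the per-device sampling noise, which is the effect such distributed analyses quantify.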