1 code implementation • 16 Apr 2024 • Nadav Joseph Outmezguine, Noam Levi
With the success of deep neural networks (NNs) in a variety of domains, the computational and storage requirements for training and deploying large NNs have become a bottleneck for further improvements.
1 code implementation • 14 Feb 2024 • Jack Miller, Patrick Gleeson, Charles O'Neill, Thang Bui, Noam Levi
Neural networks sometimes exhibit grokking, a phenomenon where perfect or near-perfect performance is achieved on a validation set well after the same performance has been obtained on the corresponding training set.
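To make the definition concrete, here is a minimal sketch (plain NumPy; the 0.99 threshold and the synthetic learning curves are illustrative assumptions, not taken from the paper) that quantifies grokking as the epoch gap between the training and validation curves first reaching the same accuracy:

```python
import numpy as np

def grokking_gap(train_acc, val_acc, threshold=0.99):
    """Epochs between training and validation accuracy first crossing
    `threshold`; returns -1 if either curve never crosses it."""
    if not (np.any(train_acc >= threshold) and np.any(val_acc >= threshold)):
        return -1
    train_hit = np.argmax(train_acc >= threshold)  # first crossing index
    val_hit = np.argmax(val_acc >= threshold)
    return val_hit - train_hit

# Synthetic curves: training saturates early, validation "groks" much later.
epochs = np.arange(10_000)
train_acc = 1.0 - np.exp(-epochs / 200.0)                  # fits quickly
val_acc = 1.0 / (1.0 + np.exp(-(epochs - 6000) / 300.0))   # jumps late

print(grokking_gap(train_acc, val_acc))  # large positive gap => grokking
```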
no code implementations • 2 Nov 2023 • Noam Levi, Yaron Oz
We show that, from the RMT perspective, the turbulence Gram matrices lie in the same universality class as quantum chaotic rather than integrable systems, and that the data exhibits power-law scalings in the bulk of its eigenvalues which differ markedly from those of uncorrelated classical chaos, random data, and natural images.
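As a hedged illustration of this kind of spectral measurement (a toy pipeline, not the paper's analysis; correlated Gaussian data stands in for the real datasets), one can form the empirical Gram matrix, extract its eigenvalue spectrum, and fit a power law to the bulk of the eigenvalues on log-log axes:

```python
import numpy as np

# Assumed setup: rows are samples, columns are features; a random linear
# map imposes feature correlations on otherwise Gaussian data.
rng = np.random.default_rng(0)
n_samples, n_features = 512, 1024
mix = rng.standard_normal((n_features, n_features)) / np.sqrt(n_features)
X = rng.standard_normal((n_samples, n_features)) @ mix

gram = X @ X.T / n_features              # empirical Gram matrix
eigs = np.linalg.eigvalsh(gram)[::-1]    # eigenvalues, descending
bulk = eigs[5:256]                       # drop top outliers, keep the bulk

# Fit lambda_k ~ k^(-alpha) by least squares in log-log space.
k = np.arange(6, 6 + bulk.size)
alpha, log_c = np.polyfit(np.log(k), np.log(bulk), 1)
print(f"estimated bulk power-law exponent: {-alpha:.2f}")
```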
no code implementations • 25 Oct 2023 • Noam Levi, Alon Beck, Yohai Bar-Sinai
Grokking is the intriguing phenomenon where a model learns to generalize long after it has fit the training data.
no code implementations • 26 Jun 2023 • Noam Levi, Yaron Oz
We study universal traits which emerge both in real-world complex datasets and in artificially generated ones.
no code implementations • 3 Apr 2023 • Theo Jules, Gal Brener, Tal Kachman, Noam Levi, Yohai Bar-Sinai
The training of neural networks is a complex, high-dimensional, non-convex and noisy optimization problem whose theoretical understanding is interesting both from an applied perspective and for fundamental reasons.
no code implementations • 27 Oct 2022 • Noam Levi, Itay M. Bloch, Marat Freytsis, Tomer Volansky
We introduce Noise Injection Node Regularization (NINR), a method of injecting structured noise into Deep Neural Networks (DNNs) during the training stage, resulting in an emergent regularizing effect.
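The abstract does not specify the architecture, so the following is only a hypothetical sketch of the general idea: extra input nodes carry noise during training and are silenced at inference. The plain additive Gaussian noise, the `NoiseInjectionNode` module, and all its parameters are assumptions for illustration and may differ from the structured noise used by NINR (the same construction also suggests how the probing method of the companion paper below could perturb a network):

```python
import torch
import torch.nn as nn

class NoiseInjectionNode(nn.Module):
    """Concatenates noise inputs to the data during training only.
    A hypothetical illustration, not the published NINR layer."""
    def __init__(self, in_features, out_features, noise_dim=8, sigma=0.1):
        super().__init__()
        self.noise_dim = noise_dim
        self.sigma = sigma
        self.linear = nn.Linear(in_features + noise_dim, out_features)

    def forward(self, x):
        if self.training:
            noise = self.sigma * torch.randn(x.size(0), self.noise_dim,
                                             device=x.device)
        else:  # deterministic at evaluation time
            noise = torch.zeros(x.size(0), self.noise_dim, device=x.device)
        return self.linear(torch.cat([x, noise], dim=1))

# Usage: drop-in first layer of an MLP classifier.
model = nn.Sequential(NoiseInjectionNode(784, 128), nn.ReLU(),
                      nn.Linear(128, 10))
```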
no code implementations • 24 Oct 2022 • Noam Levi, Itay Bloch, Marat Freytsis, Tomer Volansky
We propose a new method to probe the learning mechanism of Deep Neural Networks (DNNs) by perturbing the system using Noise Injection Nodes (NINs).