no code implementations • 24 Oct 2023 • Leonardo Petrini
Artificial intelligence, and in particular its subfield of machine learning, has seen a paradigm shift towards models that learn from and adapt to data.
1 code implementation • 5 Jul 2023 • Francesco Cagnetta, Leonardo Petrini, Umberto M. Tomasini, Alessandro Favero, Matthieu Wyart
The model defines a classification task in which each class corresponds to a group of high-level features, chosen among several equivalent groups associated with that class.
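
To make this construction concrete, below is a minimal sketch of how such a synthetic task could be generated. All names and sizes are illustrative assumptions, not the authors' released code, and the sketch only captures the single level described in the snippet above.

```python
import random

# Toy sketch of a synthetic task in which each class is associated with
# several "equivalent" groups of high-level features, and an input is
# produced by sampling one of the groups for its class.
# (Hypothetical parameters; collisions between classes are ignored here.)

NUM_CLASSES = 4       # number of labels
NUM_FEATURES = 8      # size of the high-level feature vocabulary
GROUP_SIZE = 2        # features per group
GROUPS_PER_CLASS = 3  # equivalent groups per class

rng = random.Random(0)

# Assign each class its equivalent groups of high-level features.
class_groups = {
    c: [tuple(rng.choices(range(NUM_FEATURES), k=GROUP_SIZE))
        for _ in range(GROUPS_PER_CLASS)]
    for c in range(NUM_CLASSES)
}

def sample(c):
    """Draw an input for class c by picking one of its equivalent groups."""
    return rng.choice(class_groups[c])

data = [(sample(c), c) for c in range(NUM_CLASSES) for _ in range(5)]
print(data[:3])
```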
1 code implementation • 4 Oct 2022 • Umberto M. Tomasini, Leonardo Petrini, Francesco Cagnetta, Matthieu Wyart
Here, we (i) show empirically for various architectures that stability to image diffeomorphisms is achieved by both spatial and channel pooling, (ii) introduce a model scale-detection task that reproduces our empirical observations on spatial pooling, and (iii) compute analytically how the sensitivity to diffeomorphisms and noise scales with depth due to spatial pooling.
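
As a rough illustration of point (i), the sketch below compares how much a convolutional feature map changes under a small image deformation versus under Gaussian noise of matched norm, with and without spatial pooling. The one-pixel translation is a crude stand-in for a smooth diffeomorphism, and the whole setup is an assumption for illustration, not the paper's measurement protocol.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 32, 32)                # a stand-in "image"
shift = torch.roll(x, shifts=1, dims=-1)     # one-pixel translation as a crude diffeomorphism
eta = torch.randn_like(x)
eta = eta * (shift - x).norm() / eta.norm()  # noise with matched perturbation norm

conv = torch.nn.Conv2d(1, 8, kernel_size=3, padding=1)

def sensitivity(f, a, b):
    """Relative change of the features f under perturbing b into a."""
    return (f(a) - f(b)).norm().item() / (a - b).norm().item()

feat = lambda z: conv(z)                        # no pooling
feat_pool = lambda z: F.avg_pool2d(conv(z), 4)  # with spatial pooling

for name, f in [("no pooling", feat), ("avg pooling", feat_pool)]:
    r = sensitivity(f, shift, x) / sensitivity(f, x + eta, x)
    print(f"{name}: diffeo-to-noise sensitivity ratio = {r:.2f}")
```

A smaller ratio for the pooled features would reflect the mechanism discussed in the paper: spatial pooling trades sensitivity to small deformations against sensitivity to noise.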
1 code implementation • 24 Jun 2022 • Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, Matthieu Wyart
It is widely believed that the success of deep networks lies in their ability to learn a meaningful representation of the features of the data.
2 code implementations • NeurIPS 2021 • Leonardo Petrini, Alessandro Favero, Mario Geiger, Matthieu Wyart
Understanding why deep nets can classify data in large dimensions remains a challenge.
1 code implementation • 30 Dec 2020 • Mario Geiger, Leonardo Petrini, Matthieu Wyart
In this manuscript, we review recent results elucidating (i) and (ii), and the perspective they offer on the (still unexplained) curse-of-dimensionality paradox.
1 code implementation • 22 Jul 2020 • Jonas Paccolat, Leonardo Petrini, Mario Geiger, Kevin Tyloo, Matthieu Wyart
We confirm these predictions both for a one-hidden-layer FC network trained on the stripe model and for a 16-layer CNN trained on MNIST, for which we also find $\beta_\text{Feature}>\beta_\text{Lazy}$.
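
A hedged sketch of how such a learning-curve exponent could be estimated on a simplified stripe-model instance follows: here the label depends only on the first input coordinate, and the dimension, training sizes, and sklearn's MLPClassifier are illustrative assumptions standing in for the paper's actual networks and setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Simplified stripe-model instance: y = sign(x_1), so the decision
# boundary is flat along all but one coordinate. We train a one-hidden-
# layer network at several training-set sizes n and read the learning-
# curve exponent beta from a log-log fit of err(n) ~ n^(-beta).

rng = np.random.default_rng(0)
d = 10  # input dimension (illustrative)

def stripe_data(n):
    X = rng.standard_normal((n, d))
    y = np.sign(X[:, 0])  # label depends on the first coordinate only
    return X, y

X_test, y_test = stripe_data(10_000)
sizes, errs = [128, 256, 512, 1024, 2048], []
for n in sizes:
    X, y = stripe_data(n)
    net = MLPClassifier(hidden_layer_sizes=(128,), max_iter=2000, random_state=0)
    net.fit(X, y)
    # Guard against an exactly-zero error, which would break the log fit.
    errs.append(max(1.0 - net.score(X_test, y_test), 1e-4))

beta = -np.polyfit(np.log(sizes), np.log(errs), 1)[0]
print(f"estimated learning-curve exponent beta ~ {beta:.2f}")
```

Comparing such fits between feature-learning and lazy-training regimes is what an inequality like $\beta_\text{Feature}>\beta_\text{Lazy}$ summarizes: the feature regime's test error decays faster with the number of training points.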