Search Results for author: Francesco Cagnetta

Found 7 papers, 4 papers with code

Towards a theory of how the structure of language is acquired by deep neural networks

no code implementations • 28 May 2024 • Francesco Cagnetta, Matthieu Wyart

We conjecture that the relationship between training set size and effective range of correlations holds beyond our synthetic datasets.

Language Modelling

How Deep Neural Networks Learn Compositional Data: The Random Hierarchy Model

1 code implementation • 5 Jul 2023 • Francesco Cagnetta, Leonardo Petrini, Umberto M. Tomasini, Alessandro Favero, Matthieu Wyart

The model is a classification task where each class corresponds to a group of high-level features, chosen among several equivalent groups associated with the same class.
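The description above — each class expands through several equivalent groups of lower-level features — can be sketched as a small generative process. This is an illustrative reconstruction of that structure, not the authors' released code; all parameter names (`num_synonyms`, `branching`, `depth`, `vocab`) are assumptions for the sketch.

```python
import random

def sample_rhm(num_classes=2, num_synonyms=2, branching=2, depth=2, vocab=8, seed=0):
    """Hypothetical sketch of a Random-Hierarchy-Model-style sample:
    each symbol at a given level expands into one of `num_synonyms`
    equivalent tuples of `branching` lower-level symbols, until `depth`
    levels produce the observable input."""
    rng = random.Random(seed)
    rules = {}  # (symbol, level) -> list of equivalent expansions

    def expansions(symbol, level):
        if (symbol, level) not in rules:
            rules[(symbol, level)] = [
                tuple(rng.randrange(vocab) for _ in range(branching))
                for _ in range(num_synonyms)
            ]
        return rules[(symbol, level)]

    def expand(symbol, level):
        if level == depth:
            return [symbol]  # leaf: a low-level input feature
        choice = rng.choice(expansions(symbol, level))  # pick one synonym group
        out = []
        for s in choice:
            out.extend(expand(s, level + 1))
        return out

    label = rng.randrange(num_classes)
    return label, expand(label, 0)

label, leaves = sample_rhm()  # leaves has branching**depth low-level features
```

The key property the sketch reproduces is that the same class label can generate several distinct but semantically equivalent inputs, since each expansion step chooses among synonym groups.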

Kernels, Data & Physics

no code implementations • 5 Jul 2023 • Francesco Cagnetta, Deborah Oliveira, Mahalakshmi Sabanayagam, Nikolaos Tsilivis, Julia Kempe

Lecture notes from the course given by Professor Julia Kempe at the summer school "Statistical physics of Machine Learning" in Les Houches.

Adversarial Robustness • Inductive Bias

How deep convolutional neural networks lose spatial information with training

1 code implementation • 4 Oct 2022 • Umberto M. Tomasini, Leonardo Petrini, Francesco Cagnetta, Matthieu Wyart

Here, we (i) show empirically for various architectures that stability to image diffeomorphisms is achieved by both spatial and channel pooling, (ii) introduce a model scale-detection task which reproduces our empirical observations on spatial pooling and (iii) compute analytically how the sensitivity to diffeomorphisms and noise scales with depth due to spatial pooling.

What Can Be Learnt With Wide Convolutional Neural Networks?

1 code implementation • 1 Aug 2022 • Francesco Cagnetta, Alessandro Favero, Matthieu Wyart

Interestingly, we find that, despite their hierarchical structure, the functions generated by infinitely-wide deep CNNs are too rich to be efficiently learnable in high dimension.

Learning sparse features can lead to overfitting in neural networks

1 code implementation • 24 Jun 2022 • Leonardo Petrini, Francesco Cagnetta, Eric Vanden-Eijnden, Matthieu Wyart

It is widely believed that the success of deep networks lies in their ability to learn a meaningful representation of the features of the data.

Locality defeats the curse of dimensionality in convolutional teacher-student scenarios

no code implementations • NeurIPS 2021 • Alessandro Favero, Francesco Cagnetta, Matthieu Wyart

Convolutional neural networks perform a local and translationally-invariant treatment of the data: quantifying which of these two aspects is central to their success remains a challenge.

Regression
