Search Results for author: Cédric Gerbelot

Found 7 papers, 3 papers with code

Applying statistical learning theory to deep learning

no code implementations26 Nov 2023 Cédric Gerbelot, Avetik Karagulyan, Stefani Karp, Kavya Ravichandran, Menachem Stern, Nathan Srebro

Although statistical learning theory provides a robust framework to understand supervised learning, many theoretical aspects of deep learning remain unclear, in particular how different architectures may lead to inductive bias when trained using gradient-based methods.

Inductive Bias · Learning Theory +1

Learning curves for the multi-class teacher-student perceptron

1 code implementation22 Mar 2022 Elisabetta Cornacchia, Francesca Mignacco, Rodrigo Veiga, Cédric Gerbelot, Bruno Loureiro, Lenka Zdeborová

For Gaussian teacher weights, we investigate the performance of ERM with both cross-entropy and square losses, and explore the role of ridge regularisation in approaching Bayes-optimality.
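
A minimal sketch of the kind of experiment this setting describes, assuming i.i.d. Gaussian inputs, a Gaussian multi-class teacher whose argmax gives the label, and scikit-learn's L2-regularised logistic regression as the cross-entropy ERM solver; the dimensions and regularisation strength are illustrative choices, not taken from the paper.

# Teacher-student sketch: Gaussian data, Gaussian multi-class teacher,
# ERM with the cross-entropy loss and ridge (L2) regularisation (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n_train, n_test, K = 200, 1000, 5000, 3      # input dimension, samples, classes

# Teacher: Gaussian weight matrix; the label is the argmax of the teacher scores.
W_teacher = rng.standard_normal((K, d)) / np.sqrt(d)
def sample(n):
    X = rng.standard_normal((n, d))
    y = np.argmax(X @ W_teacher.T, axis=1)
    return X, y

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)

# Cross-entropy ERM; the ridge strength is lambda = 1/C.
clf = LogisticRegression(C=1.0, max_iter=2000)
clf.fit(X_tr, y_tr)
print("test classification error:", np.mean(clf.predict(X_te) != y_te))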

Binary Classification · Learning Theory +1

Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension

no code implementations31 Jan 2022 Bruno Loureiro, Cédric Gerbelot, Maria Refinetti, Gabriele Sicuro, Florent Krzakala

From the sampling of data to the initialisation of parameters, randomness is ubiquitous in modern Machine Learning practice.

Graph-based Approximate Message Passing Iterations

no code implementations24 Sep 2021 Cédric Gerbelot, Raphaël Berthier

Approximate message passing (AMP) algorithms have become an important element of high-dimensional statistical inference, mostly due to their adaptability and their concentration properties, which are captured by the state evolution (SE) equations.
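
For concreteness, here is a sketch of a textbook AMP iteration for y = A x0 + w with an i.i.d. Gaussian matrix, a soft-thresholding denoiser, and the matching scalar state-evolution recursion tracked by Monte Carlo. It illustrates what SE means for a standard AMP; it is not the graph-based iterations introduced in the paper, and the threshold policy and sizes are assumptions for the example.

# Standard AMP for y = A x0 + w with an i.i.d. Gaussian A, a soft-thresholding
# denoiser, and the matching scalar state-evolution (SE) recursion (illustrative only).
import numpy as np

rng = np.random.default_rng(1)
n, m, rho, sigma = 2000, 1000, 0.1, 0.05        # dimension, measurements, sparsity, noise std
delta = m / n

x0 = rng.standard_normal(n) * (rng.random(n) < rho)    # Bernoulli-Gaussian signal
A = rng.standard_normal((m, n)) / np.sqrt(m)           # columns have roughly unit norm
y = A @ x0 + sigma * rng.standard_normal(m)

soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

x, z = np.zeros(n), y.copy()
tau2_se = sigma**2 + rho / delta                        # SE initialisation
for _ in range(25):
    thresh = 1.5 * np.sqrt(np.mean(z**2))               # threshold tied to the residual energy
    x_new = soft(x + A.T @ z, thresh)
    z = y - A @ x_new + (z / delta) * np.mean(x_new != 0)   # Onsager correction term
    x = x_new
    # SE: tau_{t+1}^2 = sigma^2 + (1/delta) * E[(soft(X0 + tau_t G, 1.5 tau_t) - X0)^2]
    G = rng.standard_normal(100_000)
    X0 = rng.standard_normal(100_000) * (rng.random(100_000) < rho)
    tau2_se = sigma**2 + np.mean((soft(X0 + np.sqrt(tau2_se) * G, 1.5 * np.sqrt(tau2_se)) - X0)**2) / delta

print("empirical MSE:", np.mean((x - x0)**2))
print("empirical tau^2:", np.mean(z**2), " SE prediction:", tau2_se)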

Learning curves of generic features maps for realistic datasets with a teacher-student model

1 code implementation NeurIPS 2021 Bruno Loureiro, Cédric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, Marc Mézard, Lenka Zdeborová

While still solvable in a closed form, this generalization is able to capture the learning curves for a broad range of realistic data sets, thus redeeming the potential of the teacher-student framework.
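
As a rough illustration of the kind of learning curve this framework predicts, the following sketch uses a fixed random ReLU feature map as a stand-in for a generic feature map, a linear teacher acting in feature space, and ridge regression as the student; all sizes, the noise level, and the choice of map are assumptions for the example, not the paper's setup.

# Empirical learning curve for a teacher-student setup with a fixed random ReLU
# feature map and a ridge-regression student (illustrative only).
import numpy as np

rng = np.random.default_rng(2)
d, p, lam = 100, 300, 1e-2                      # input dim, number of features, ridge strength
F = rng.standard_normal((p, d)) / np.sqrt(d)    # fixed random feature map
phi = lambda X: np.maximum(X @ F.T, 0.0)        # ReLU features

theta = rng.standard_normal(p) / np.sqrt(p)     # teacher acts in feature space
def sample(n):
    X = rng.standard_normal((n, d))
    Z = phi(X)
    return Z, Z @ theta + 0.1 * rng.standard_normal(n)

Z_te, y_te = sample(5000)
for n in [50, 100, 200, 400, 800, 1600]:
    Z, y = sample(n)
    w = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)    # ridge estimator
    print(f"n={n:5d}  test MSE={np.mean((Z_te @ w - y_te)**2):.4f}")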

Asymptotic errors for convex penalized linear regression beyond Gaussian matrices

no code implementations11 Feb 2020 Cédric Gerbelot, Alia Abbara, Florent Krzakala

We consider the problem of learning a coefficient vector $x_{0} \in \mathbb{R}^{N}$ from noisy linear observations $y = Fx_{0} + w \in \mathbb{R}^{M}$ in the high-dimensional limit $M, N \to \infty$ with $\alpha = M/N$ fixed.
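
A small sketch of this setting: a sparse $x_{0}$, noisy observations $y = Fx_{0} + w$, and a convex $\ell_1$-penalised estimator (scikit-learn's Lasso), evaluated for a few values of $\alpha = M/N$. A Gaussian $F$ is used here purely for simplicity, whereas the paper's point is precisely to go beyond Gaussian matrices; the regularisation strength is an arbitrary choice for the illustration.

# Convex penalized linear regression sketch: recover a sparse x0 from y = F x0 + w
# with an l1 penalty, sweeping the sampling ratio alpha = M/N (illustrative only).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
N, rho, sigma = 500, 0.2, 0.1
x0 = rng.standard_normal(N) * (rng.random(N) < rho)    # sparse ground-truth vector

for alpha in [0.5, 1.0, 2.0, 4.0]:
    M = int(alpha * N)
    F = rng.standard_normal((M, N)) / np.sqrt(N)       # Gaussian matrix for simplicity
    y = F @ x0 + sigma * rng.standard_normal(M)
    # Lasso's `alpha` keyword is the l1 regularisation strength, unrelated to the ratio above.
    est = Lasso(alpha=0.01, fit_intercept=False, max_iter=50_000).fit(F, y)
    mse = np.mean((est.coef_ - x0)**2)
    print(f"alpha = M/N = {alpha:.1f}  per-coordinate MSE = {mse:.4f}")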

regression
