Search Results for author: Cédric Gerbelot

Found 10 papers, 3 papers with code

Permutation recovery of spikes in noisy high-dimensional tensor estimation

no code implementations • 19 Dec 2024 • Gérard Ben Arous, Cédric Gerbelot, Vanessa Piccolo

We determine the sample complexity required for gradient flow to efficiently recover all spikes, without imposing any assumptions on the separation of the signal-to-noise ratios (SNRs).

Stochastic gradient descent in high dimensions for multi-spiked tensor PCA

no code implementations • 23 Oct 2024 • Gérard Ben Arous, Cédric Gerbelot, Vanessa Piccolo

The order in which correlations become macroscopic depends on their initial values and the corresponding SNRs, leading to either exact recovery or recovery of a permutation of the spikes.

Langevin dynamics for high-dimensional optimization: the case of multi-spiked tensor PCA

no code implementations • 12 Aug 2024 • Gérard Ben Arous, Cédric Gerbelot, Vanessa Piccolo

We study nonconvex optimization in high dimensions through Langevin dynamics, focusing on the multi-spiked tensor PCA problem.
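As a rough illustration of the setting described in the abstract, the sketch below runs Langevin dynamics on the unit sphere for a single-spike order-3 tensor PCA problem (the paper treats the harder multi-spiked case). All parameter values (dimension, SNR, step size, inverse temperature, warm start) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, snr = 50, 10.0        # dimension and signal-to-noise ratio (illustrative)

# Planted spike v on the unit sphere, noisy order-3 observation
# T = snr * v⊗v⊗v + W / sqrt(N), with W i.i.d. Gaussian noise
v = rng.standard_normal(N)
v /= np.linalg.norm(v)
W = rng.standard_normal((N, N, N))
T = snr * np.einsum('i,j,k->ijk', v, v, v) + W / np.sqrt(N)

def grad(x):
    # Euclidean gradient of H(x) = <T, x⊗x⊗x>, summing over the three slots
    return (np.einsum('ijk,j,k->i', T, x, x)
            + np.einsum('ijk,i,k->j', T, x, x)
            + np.einsum('ijk,i,j->k', T, x, x))

# Warm start with macroscopic overlap with the spike (assumed for the sketch)
x = 0.7 * v + 0.7 * rng.standard_normal(N) / np.sqrt(N)
x /= np.linalg.norm(x)

# Langevin dynamics: gradient step + thermal noise + projection to the sphere
dt, beta = 1e-3, 100.0   # step size and inverse temperature (assumed)
for _ in range(3000):
    x = x + dt * grad(x) + np.sqrt(2 * dt / beta) * rng.standard_normal(N)
    x /= np.linalg.norm(x)

print(abs(x @ v))        # correlation with the planted spike
```

The projection step keeps the dynamics on the sphere, mirroring the spherical constraint common in tensor PCA analyses.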

Applying statistical learning theory to deep learning

no code implementations • 26 Nov 2023 • Cédric Gerbelot, Avetik Karagulyan, Stefani Karp, Kavya Ravichandran, Menachem Stern, Nathan Srebro

Although statistical learning theory provides a robust framework for understanding supervised learning, many theoretical aspects of deep learning remain unclear, in particular how different architectures may lead to inductive bias when trained using gradient-based methods.

Tasks: Deep Learning · Inductive Bias · +2

Learning curves for the multi-class teacher-student perceptron

1 code implementation • 22 Mar 2022 • Elisabetta Cornacchia, Francesca Mignacco, Rodrigo Veiga, Cédric Gerbelot, Bruno Loureiro, Lenka Zdeborová

For Gaussian teacher weights, we investigate the performance of ERM with both cross-entropy and square losses, and explore the role of ridge regularisation in approaching Bayes-optimality.

Tasks: Binary Classification · Learning Theory · +1
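The abstract above mentions ERM with a square loss and ridge regularisation for a Gaussian teacher. A minimal sketch of that setup, with entirely illustrative sizes and regularisation strength (not the paper's experiments): labels come from the argmax of a random linear teacher, and the student fits one-hot targets by ridge-regularised least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, K = 100, 500, 3              # input dim, samples, classes (illustrative)

# Gaussian teacher: labels are the argmax of K linear scores
W_teacher = rng.standard_normal((K, d)) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = (X @ W_teacher.T).argmax(axis=1)

# ERM with square loss on one-hot targets + ridge regularisation (closed form)
lam = 1e-1                          # ridge strength (assumed)
Y = np.eye(K)[y]                    # one-hot encoding of the labels
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y).T

# Generalisation error on fresh samples from the same teacher
X_test = rng.standard_normal((2000, d))
y_test = (X_test @ W_teacher.T).argmax(axis=1)
err = np.mean((X_test @ W_hat.T).argmax(axis=1) != y_test)
print(err)
```

Sweeping `n` at fixed `d` in this sketch traces out an empirical learning curve, the object the paper characterises exactly in the high-dimensional limit.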

Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension

no code implementations • 31 Jan 2022 • Bruno Loureiro, Cédric Gerbelot, Maria Refinetti, Gabriele Sicuro, Florent Krzakala

From the sampling of data to the initialisation of parameters, randomness is ubiquitous in modern Machine Learning practice.

Graph-based Approximate Message Passing Iterations

no code implementations • 24 Sep 2021 • Cédric Gerbelot, Raphaël Berthier

Approximate message passing (AMP) algorithms have become an important element of high-dimensional statistical inference, mostly due to their adaptability and to the concentration properties captured by their state evolution (SE) equations.
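To make the AMP/SE terminology concrete, here is a minimal sketch of an AMP iteration for a rank-one spiked Wigner model, a standard textbook instance rather than the graph-based framework of this paper. The tanh denoiser, the Onsager correction, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, snr = 2000, 3.0

# Spiked Wigner model: Y = (snr/N) v v^T + W / sqrt(N), with a ±1 spike v
v = np.sign(rng.standard_normal(N))
G = rng.standard_normal((N, N))
W = (G + G.T) / np.sqrt(2)             # symmetric noise, unit-variance entries
Y = (snr / N) * np.outer(v, v) + W / np.sqrt(N)

# AMP iteration with a tanh denoiser and an Onsager correction term
x = 0.2 * v + rng.standard_normal(N)   # warm start with small overlap (assumed)
f_old = np.zeros(N)
for _ in range(30):
    f = np.tanh(x)                     # denoised estimate of the spike
    b = np.mean(1.0 - f ** 2)          # Onsager coefficient: average of tanh'
    x, f_old = Y @ f - b * f_old, f    # memory term distinguishes AMP from power iteration

overlap = abs(np.tanh(x) @ v) / (np.linalg.norm(np.tanh(x)) * np.sqrt(N))
print(overlap)
```

The Onsager term `b * f_old` is what lets the iterates behave like signal plus independent Gaussian noise, which is exactly the structure the state evolution equations track.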

Learning curves of generic features maps for realistic datasets with a teacher-student model

1 code implementation NeurIPS 2021 Bruno Loureiro, Cédric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, Marc Mézard, Lenka Zdeborová

While still solvable in a closed form, this generalization is able to capture the learning curves for a broad range of realistic data sets, thus redeeming the potential of the teacher-student framework.

Asymptotic errors for convex penalized linear regression beyond Gaussian matrices

no code implementations • 11 Feb 2020 • Cédric Gerbelot, Alia Abbara, Florent Krzakala

We consider the problem of learning a coefficient vector $x_{0} \in \mathbb{R}^{N}$ from noisy linear observations $y = Fx_{0} + w \in \mathbb{R}^{M}$ in the high-dimensional limit $M, N \to \infty$ with $\alpha = M/N$ fixed.

Tasks: Regression
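A small sketch of the observation model in the abstract, paired with one convex penalized estimator. The paper studies general convex penalties and non-Gaussian matrices; here we take a Gaussian $F$ and an $\ell_1$ penalty solved by proximal gradient descent (ISTA), with all sizes and parameters chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 300, 600                    # alpha = M / N = 0.5 (illustrative)

# Noisy linear observations y = F x0 + w, with a sparse ground truth x0
F = rng.standard_normal((M, N)) / np.sqrt(N)
x0 = rng.standard_normal(N) * (rng.random(N) < 0.1)   # ~10% nonzero entries
w = 0.05 * rng.standard_normal(M)
y = F @ x0 + w

# Convex penalized regression: Lasso via ISTA (proximal gradient descent)
lam = 0.01                          # penalty strength (assumed)
L = np.linalg.norm(F, 2) ** 2       # Lipschitz constant of the smooth part
x = np.zeros(N)
for _ in range(500):
    z = x - F.T @ (F @ x - y) / L                         # gradient step
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0) # soft threshold

mse = np.mean((x - x0) ** 2)
print(mse)
```

The quantity the paper characterises is precisely the asymptotic value of this reconstruction error as $M, N \to \infty$ at fixed $\alpha$.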
