Search Results for author: Hugo Cui

Found 10 papers, 4 papers with code

Asymptotics of Learning with Deep Structured (Random) Features

no code implementations • 21 Feb 2024 • Dominik Schröder, Daniil Dmitriev, Hugo Cui, Bruno Loureiro

For a large class of feature maps we provide a tight asymptotic characterisation of the test error associated with learning the readout layer, in the high-dimensional limit where the input dimension, hidden layer widths, and number of training samples are proportionally large.
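The setting described above — training only the readout layer on top of a fixed feature map — can be illustrated with a minimal sketch. This is generic NumPy, not the paper's code; the tanh feature map, dimensions, and toy target are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 50, 100, 200          # input dim, hidden width, samples

# Fixed (random, untrained) feature map: x -> tanh(Wx / sqrt(d))
W = rng.standard_normal((p, d))
X = rng.standard_normal((n, d))
y = np.sign(X[:, 0])            # toy target depending on one direction

Phi = np.tanh(X @ W.T / np.sqrt(d))   # hidden-layer features

# Learn only the readout layer, by ridge regression
lam = 1e-2
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(p), Phi.T @ y)

# Test error of the learned readout on fresh samples
X_test = rng.standard_normal((n, d))
y_test = np.sign(X_test[:, 0])
Phi_test = np.tanh(X_test @ W.T / np.sqrt(d))
test_mse = np.mean((Phi_test @ a - y_test) ** 2)
print(test_mse)
```

The asymptotic characterisation in the paper describes the behaviour of such a test error when d, p, and n all grow proportionally.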

Asymptotics of feature learning in two-layer networks after one gradient-step

1 code implementation • 7 Feb 2024 • Hugo Cui, Luca Pesce, Yatin Dandi, Florent Krzakala, Yue M. Lu, Lenka Zdeborová, Bruno Loureiro

To our knowledge, our results provide the first tight description of the impact of feature learning on the generalization of two-layer neural networks in the large learning rate regime $\eta=\Theta_{d}(d)$, beyond perturbative finite width corrections of the conjugate and neural tangent kernels.
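The one-gradient-step procedure can be sketched as follows: take a single large gradient step on the first-layer weights, then refit the readout on the updated features. This is a toy NumPy illustration under assumed dimensions and a toy single-index target, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, n = 40, 60, 300            # input dim, width, samples
X = rng.standard_normal((n, d))
y = np.tanh(X[:, 0])             # toy single-index target

W = rng.standard_normal((p, d)) / np.sqrt(d)   # first layer
a0 = rng.standard_normal(p) / np.sqrt(p)       # readout used for the step

# Residuals of the initial network f(x) = a0 . tanh(Wx)
H = np.tanh(X @ W.T)                           # (n, p) hidden activations
res = H @ a0 - y

# Gradient of the squared loss w.r.t. W, and one step with eta = Theta(d)
S = 1.0 - H ** 2                               # tanh'(Wx)
grad_W = ((res[:, None] * S) * a0[None, :]).T @ X / n
eta = d                                        # large learning-rate regime
W1 = W - eta * grad_W

# Refit the readout on the updated features by ridge regression
H1 = np.tanh(X @ W1.T)
lam = 1e-3
a1 = np.linalg.solve(H1.T @ H1 + lam * np.eye(p), H1.T @ y)
train_mse = np.mean((H1 @ a1 - y) ** 2)
print(train_mse)
```

The paper characterises the resulting generalization error exactly in the proportional high-dimensional limit; the sketch only shows the mechanics of the single step.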

A phase transition between positional and semantic learning in a solvable model of dot-product attention

no code implementations • 6 Feb 2024 • Hugo Cui, Freya Behrens, Florent Krzakala, Lenka Zdeborová

We investigate how a dot-product attention layer learns a positional attention matrix (with tokens attending to each other based on their respective positions) and a semantic attention matrix (with tokens attending to each other based on their meaning).
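The distinction between the two attention mechanisms can be made concrete with a toy NumPy sketch of a single dot-product attention head, where queries and keys are built either from token embeddings (semantic) or from positional encodings (positional). This is an illustration of the two mechanisms, not the authors' solvable model.

```python
import numpy as np

rng = np.random.default_rng(1)
L, d = 6, 8                      # sequence length, embedding dim

E = rng.standard_normal((L, d))  # token (semantic) embeddings
P = rng.standard_normal((L, d))  # positional encodings

def attention(Q_in, K_in):
    """Row-wise softmax of scaled dot products between queries and keys."""
    scores = Q_in @ K_in.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)

# Semantic attention: tokens attend based on their content
A_semantic = attention(E, E)
# Positional attention: tokens attend based on their positions only
A_positional = attention(P, P)
```

Each row of either matrix is a probability distribution over the L tokens; the paper studies a phase transition in which of the two mechanisms the trained layer implements.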

Analysis of learning a flow-based generative model from limited sample complexity

1 code implementation • 5 Oct 2023 • Hugo Cui, Florent Krzakala, Eric Vanden-Eijnden, Lenka Zdeborová

We study the problem of training a flow-based generative model, parametrized by a two-layer autoencoder, to sample from a high-dimensional Gaussian mixture.

Denoising

Deterministic equivalent and error universality of deep random features learning

1 code implementation • 1 Feb 2023 • Dominik Schröder, Hugo Cui, Daniil Dmitriev, Bruno Loureiro

Establishing this result requires proving a deterministic equivalent for traces of the deep random features sample covariance matrices which can be of independent interest.

Bayes-optimal Learning of Deep Random Networks of Extensive-width

no code implementations • 1 Feb 2023 • Hugo Cui, Florent Krzakala, Lenka Zdeborová

We consider the problem of learning a target function corresponding to a deep, extensive-width, non-linear neural network with random Gaussian weights.
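The target functions in question can be sketched directly: a deep network with frozen random Gaussian weights and hidden widths of the same order as the input dimension. This is a generic NumPy illustration of the setting (tanh activation and specific widths are assumptions), not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 100                                  # input dimension
widths = [100, 150, 100]                 # extensive, O(d), hidden widths

# Draw a deep network with fixed random Gaussian weights
Ws = []
prev = d
for w in widths:
    Ws.append(rng.standard_normal((w, prev)) / np.sqrt(prev))
    prev = w
v = rng.standard_normal(prev) / np.sqrt(prev)

def target(x):
    """Non-linear target defined by the deep random network."""
    h = x
    for W in Ws:
        h = np.tanh(W @ h)
    return v @ h

x = rng.standard_normal(d)
out = target(x)
print(out)
```

The paper characterises the Bayes-optimal error for learning such targets from samples, in the limit where d and all widths grow proportionally.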

Regression

Error Scaling Laws for Kernel Classification under Source and Capacity Conditions

no code implementations • 29 Jan 2022 • Hugo Cui, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová

We find that our rates tightly describe the learning curves for this class of data sets, and are also observed on real data.

Classification

Generalization Error Rates in Kernel Regression: The Crossover from the Noiseless to Noisy Regime

no code implementations • NeurIPS 2021 • Hugo Cui, Bruno Loureiro, Florent Krzakala, Lenka Zdeborová

In this work, we unify and extend this line of work, providing a characterization of all regimes and excess error decay rates that can be observed in terms of the interplay of noise and regularization.
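The object whose excess error the paper characterises is kernel ridge regression with noisy labels. A generic sketch (RBF kernel, toy target, and noise level are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 80, 5
X = rng.standard_normal((n, d))

def f_star(A):
    return np.sin(A[:, 0])               # toy target function

noise = 0.1                              # label noise level
y = f_star(X) + noise * rng.standard_normal(n)

def rbf(A, B, gamma=0.5):
    """RBF (Gaussian) kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: (K + lam I) alpha = y
K = rbf(X, X)
lam = 1e-2                               # ridge regularisation
alpha = np.linalg.solve(K + lam * np.eye(n), y)

# Excess error relative to the noiseless target on fresh inputs
X_test = rng.standard_normal((200, d))
pred = rbf(X_test, X) @ alpha
excess = np.mean((pred - f_star(X_test)) ** 2)
print(excess)
```

The interplay the paper studies is how this excess error decays with n depending on whether the noise term or the regularisation-induced bias dominates.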

Regression

Learning curves of generic features maps for realistic datasets with a teacher-student model

1 code implementation • NeurIPS 2021 • Bruno Loureiro, Cédric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, Marc Mézard, Lenka Zdeborová

While still solvable in a closed form, this generalization is able to capture the learning curves for a broad range of realistic data sets, thus redeeming the potential of the teacher-student framework.

Large deviations for the perceptron model and consequences for active learning

no code implementations • 9 Dec 2019 • Hugo Cui, Luca Saglietti, Lenka Zdeborová

These large deviations then provide optimal achievable performance boundaries for any active learning algorithm.

Active Learning
