Search Results for author: Maria Refinetti

Found 10 papers, 5 papers with code

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime

no code implementations · ICML 2020 · Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala

We demonstrate that the latter two contributions are the crux of the double descent: they lead to the overfitting peak at the interpolation threshold and to the decay of the test error upon overparametrization.

Forward Learning with Top-Down Feedback: Empirical and Analytical Characterization

no code implementations · 10 Feb 2023 · Ravi Srinivasan, Francesca Mignacco, Martino Sorbaro, Maria Refinetti, Avi Cooper, Gabriel Kreiman, Giorgia Dellaferrera

"Forward-only" algorithms, which train neural networks while avoiding a backward pass, have recently gained attention as a way of solving the biologically unrealistic aspects of backpropagation.

Neural networks trained with SGD learn distributions of increasing complexity

1 code implementation · 21 Nov 2022 · Maria Refinetti, Alessandro Ingrosso, Sebastian Goldt

The ability of deep neural networks to generalise well even when they interpolate their training data has been explained using various "simplicity biases".

Optimal learning rate schedules in high-dimensional non-convex optimization problems

no code implementations · 9 Feb 2022 · Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli

In this case, it is optimal to keep a large learning rate during the exploration phase to escape the non-convex region as quickly as possible, then use the convex criterion $\beta=1$ to converge rapidly to the solution.

Scheduling · Vocal Bursts Intensity Prediction
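The two-phase strategy described in the excerpt can be written down as a simple schedule: a large constant rate during exploration, then a decay with the convex-criterion exponent $\beta=1$. The function below is an illustrative sketch; the phase length and base rate are hypothetical parameters, not values from the paper.

```python
def lr_schedule(t, t_explore=1000, eta0=0.5, beta=1.0):
    """Two-phase learning-rate schedule (illustrative).

    Keep a large constant rate eta0 during the exploration phase to escape
    the non-convex region, then decay as eta0 / (t - t_explore + 1)**beta
    with beta = 1 to converge rapidly once the landscape is locally convex.
    """
    if t < t_explore:
        return eta0
    return eta0 / (t - t_explore + 1) ** beta
```

For example, `lr_schedule(0)` and `lr_schedule(999)` both return the exploration rate 0.5, while `lr_schedule(1099)` has already decayed to 0.005.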

Fluctuations, Bias, Variance & Ensemble of Learners: Exact Asymptotics for Convex Losses in High-Dimension

no code implementations · 31 Jan 2022 · Bruno Loureiro, Cédric Gerbelot, Maria Refinetti, Gabriele Sicuro, Florent Krzakala

From the sampling of data to the initialisation of parameters, randomness is ubiquitous in modern Machine Learning practice.

The dynamics of representation learning in shallow, non-linear autoencoders

1 code implementation · 6 Jan 2022 · Maria Refinetti, Sebastian Goldt

We derive a set of asymptotically exact equations that describe the generalisation dynamics of autoencoders trained with stochastic gradient descent (SGD) in the limit of high-dimensional inputs.

Representation Learning

Classifying high-dimensional Gaussian mixtures: Where kernel methods fail and neural networks succeed

1 code implementation · 23 Feb 2021 · Maria Refinetti, Sebastian Goldt, Florent Krzakala, Lenka Zdeborová

Here, we show theoretically that two-layer neural networks (2LNN) with only a few hidden neurons can beat the performance of kernel learning on a simple Gaussian mixture classification task.

Image Classification

Align, then memorise: the dynamics of learning with feedback alignment

1 code implementation · 24 Nov 2020 · Maria Refinetti, Stéphane d'Ascoli, Ruben Ohana, Sebastian Goldt

Direct Feedback Alignment (DFA) is emerging as an efficient and biologically plausible alternative to the ubiquitous backpropagation algorithm for training deep neural networks.
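The core idea of DFA mentioned in the excerpt is that the hidden layer's error signal comes from a fixed random feedback vector rather than the transpose of the output weights, avoiding "weight transport". Below is a minimal numerical sketch on a hypothetical scalar regression task (the paper's analysis uses a related shallow-network setting, but this example is only illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
d, k, lr = 10, 32, 0.02
w_teacher = rng.standard_normal(d) / np.sqrt(d)   # linear target (toy task)
W1 = rng.standard_normal((k, d)) / np.sqrt(d)
w2 = rng.standard_normal(k) / np.sqrt(k)
B = rng.standard_normal(k)                        # fixed random feedback vector

losses = []
for _ in range(3000):                             # online SGD, one sample per step
    x = rng.standard_normal(d)
    h = np.tanh(W1 @ x)
    e = w2 @ h - w_teacher @ x                    # scalar output error
    w2 -= lr * e * h                              # output layer: usual SGD step
    # DFA: the hidden error signal is B * e instead of w2 * e (no weight transport)
    W1 -= lr * np.outer(B * e * (1 - h ** 2), x)
    losses.append(float(e ** 2))
```

Over training, the output weights tend to align with the feedback vector B, which is the "align, then memorise" dynamic the title refers to.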

Double Trouble in Double Descent: Bias and Variance(s) in the Lazy Regime

2 code implementations · 2 Mar 2020 · Stéphane d'Ascoli, Maria Refinetti, Giulio Biroli, Florent Krzakala

We obtain a precise asymptotic expression for the bias-variance decomposition of the test error, and show that the bias displays a phase transition at the interpolation threshold, beyond which it remains constant.
