1 code implementation • 12 Feb 2024 • Rodrigo Veiga, Anastasia Remizova, Nicolas Macris
We investigate the test risk of continuous-time stochastic gradient flow dynamics in learning theory.
no code implementations • 16 Mar 2023 • Antoine Bodin, Nicolas Macris
In this work, we present a new approach to analyze the gradient flow for a positive semi-definite matrix denoising problem in an extensive-rank and high-dimensional regime.
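The gradient flow studied here can be illustrated in a small low-rank toy setting (not the extensive-rank regime the paper analyzes). The sketch below discretizes the flow dX/dt = -(XX^T - Y)X for a factorized square loss; all dimensions, the noise level, and the step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(6)
n, r = 60, 5

X_star = rng.standard_normal((n, r)) / np.sqrt(n)
S = X_star @ X_star.T                        # ground-truth PSD matrix
N = 0.01 * rng.standard_normal((n, n))
Y = S + (N + N.T) / 2                        # noisy symmetric observation

# Discretized gradient flow on the factorized loss ||Y - X X^T||_F^2 / 4,
# whose continuous-time flow is dX/dt = -(X X^T - Y) X
X = 0.1 * rng.standard_normal((n, r))
eta = 0.2
for _ in range(500):
    X -= eta * (X @ X.T - Y) @ X

denoised_err = np.linalg.norm(X @ X.T - S)   # error of the low-rank fit
raw_err = np.linalg.norm(Y - S)              # error of the raw observation
```

The low-rank fit should filter most of the full-rank noise, so `denoised_err` ends well below `raw_err`.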
no code implementations • 13 Dec 2022 • Antoine Bodin, Nicolas Macris
Even least-squares regression exhibits atypical features such as model-wise double descent, and subsequent works have observed triple or multiple descents.
no code implementations • NeurIPS 2021 • Antoine Bodin, Nicolas Macris
A recent line of research has highlighted that random matrix tools can be used to obtain precise analytical asymptotics of the generalization (and training) errors of the random feature model.
no code implementations • 14 Sep 2021 • Jean Barbier, Nicolas Macris
We consider increasingly complex models of matrix denoising and dictionary learning in the Bayes-optimal setting, in the challenging regime where the matrices to infer have a rank growing linearly with the system size.
1 code implementation • 19 Jul 2021 • Farzad Pourkamali, Nicolas Macris
We consider the estimation of an $n$-dimensional vector $\mathbf{s}$ from the noisy element-wise measurements of $\mathbf{s}\mathbf{s}^T$, a generic problem that arises in statistics and machine learning.
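As a concrete (hypothetical) instance of this measurement model, one can take $Y = \theta\, \mathbf{s}\mathbf{s}^T + W$ with Wigner noise $W$ and use the leading eigenvector of $Y$ as a simple spectral baseline; the paper studies Bayes-optimal estimation, not this estimator, and the scaling below is an illustrative convention:

```python
import numpy as np

rng = np.random.default_rng(1)
n, theta = 500, 3.0                 # dimension, signal-to-noise ratio

s = rng.standard_normal(n)
s /= np.linalg.norm(s)              # unit-norm spike direction

# Symmetric Gaussian (Wigner) noise with entries of variance 1/n
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)

Y = theta * np.outer(s, s) + W      # noisy element-wise observation of s s^T

# Spectral baseline: leading eigenvector of Y (eigh returns ascending order)
vals, vecs = np.linalg.eigh(Y)
v = vecs[:, -1]
overlap2 = (s @ v) ** 2             # squared overlap with the truth
```

For this scaling the squared overlap is known to concentrate around $1 - 1/\theta^2$ when $\theta > 1$, so here it should land well above one half.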
no code implementations • 25 May 2021 • Antoine Bodin, Nicolas Macris
Explicit formulas for the whole time evolution of the overlap between the estimator and the unknown vector, as well as of the cost, are rigorously derived.
no code implementations • 9 Dec 2020 • Nicolas Macris, Raffaele Marino
The main idea is to construct a deep network which is trained from samples of discretized stochastic differential equations underlying Kolmogorov's equation.
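A minimal sketch of the data-generation step, assuming an Ornstein–Uhlenbeck toy process (the drift, horizon, and payoff below are illustrative choices, not the paper's): Euler–Maruyama paths give Monte-Carlo samples of $u(x_0) = \mathbb{E}[\varphi(X_T)\,|\,X_0 = x_0]$, the quantity a network would be trained to regress.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_paths(x0, T=1.0, dt=0.01, n_paths=20000):
    """Euler-Maruyama discretization of the OU process dX = -X dt + dB."""
    n_steps = int(T / dt)
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        x += -x * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

# Training pairs (x0, phi(X_T)) are Monte-Carlo samples of the PDE solution
# u(x0) = E[phi(X_T) | X_0 = x0]; here phi(x) = x**2 as a toy payoff.
x0 = 1.0
xT = simulate_paths(x0)
u_estimate = np.mean(xT ** 2)
# Closed form for the OU process: x0^2 * exp(-2T) + (1 - exp(-2T)) / 2
u_exact = x0 ** 2 * np.exp(-2.0) + (1 - np.exp(-2.0)) / 2
```

The Monte-Carlo estimate should match the closed form up to discretization and sampling error, which is the sense in which such samples can serve as regression targets.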
no code implementations • NeurIPS 2020 • Clément Luneau, Jean Barbier, Nicolas Macris
We consider generalized linear models in regimes where the number of nonzero components of the signal and accessible data points are sublinear with respect to the size of the signal.
no code implementations • NeurIPS 2020 • Jean Barbier, Nicolas Macris, Cynthia Rush
We determine statistical and computational limits for estimation of a rank-one matrix (the spike) corrupted by an additive Gaussian noise matrix, in a sparse limit where the underlying hidden vector (which generates the rank-one matrix) has a number of non-zero components scaling sub-linearly with the total dimension of the vector, and where the signal-to-noise ratio tends to infinity at an appropriate speed.
1 code implementation • 12 Dec 2019 • Elias Riedel Gårding, Nicolas Schwaller, Su Yeon Chang, Samuel Bosch, Willy Robert Laborde, Javier Naya Hernandez, Chun Lam Chan, Frédéric Gessler, Xinyu Si, Marc-André Dupertuis, Nicolas Macris
We propose the first correct special-purpose quantum circuits for preparation of Bell-diagonal states (BDS), and implement them on the IBM Quantum computer, characterizing and testing complex aspects of their quantum correlations in the full parameter space.
Quantum Physics • Information Theory
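The hardware circuits themselves are not reproduced here, but the target family is easy to write down numerically. A short sketch (the mixing weights are an arbitrary example) builds a Bell-diagonal state and checks that it is indeed diagonal in the Bell basis:

```python
import numpy as np

# The four Bell states in the computational basis {|00>, |01>, |10>, |11>}
phi_p = np.array([1, 0, 0, 1]) / np.sqrt(2)
phi_m = np.array([1, 0, 0, -1]) / np.sqrt(2)
psi_p = np.array([0, 1, 1, 0]) / np.sqrt(2)
psi_m = np.array([0, 1, -1, 0]) / np.sqrt(2)
bell = np.stack([phi_p, phi_m, psi_p, psi_m])

# A Bell-diagonal state is any convex mixture of Bell-state projectors;
# these weights are an arbitrary illustrative choice.
p = np.array([0.5, 0.2, 0.2, 0.1])
rho = sum(pi * np.outer(b, b) for pi, b in zip(p, bell))

# Sanity checks: unit trace, and diagonal when expressed in the Bell basis
trace = np.trace(rho)
in_bell_basis = bell @ rho @ bell.T
```

Since the Bell states are orthonormal, conjugating by the basis matrix recovers exactly `diag(p)`, which pins down the three-parameter family the circuits prepare.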
no code implementations • 12 Nov 2019 • Jean Barbier, Nicolas Macris
We consider statistical models of estimation of a rank-one matrix (the spike) corrupted by an additive Gaussian noise matrix in the sparse limit.
no code implementations • 6 Dec 2018 • Jean Barbier, Mohamad Dia, Nicolas Macris, Florent Krzakala, Lenka Zdeborová
We characterize the detectability phase transitions in a large set of estimation problems, where we show that there exists a gap between what currently known polynomial-time algorithms (in particular spectral methods and approximate message passing) can achieve and what is expected information-theoretically.
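The spectral side of such transitions can be illustrated with the classical BBP-type phenomenon for a rank-one spike in Wigner noise (a toy instance with assumed scalings, not the paper's general setting): below a threshold the top eigenvalue sticks to the bulk edge, above it the spike separates.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 800

def top_eigenvalue(theta):
    """Largest eigenvalue of a rank-one spike theta*ss^T plus Wigner noise."""
    s = rng.standard_normal(n)
    s /= np.linalg.norm(s)
    G = rng.standard_normal((n, n))
    W = (G + G.T) / np.sqrt(2 * n)          # bulk spectrum supported on [-2, 2]
    return np.linalg.eigvalsh(theta * np.outer(s, s) + W)[-1]

# Below the spectral threshold (theta < 1) the top eigenvalue sticks to the
# bulk edge 2; above it, it separates to roughly theta + 1/theta.
lam_weak, lam_strong = top_eigenvalue(0.5), top_eigenvalue(2.5)
```

The weak-spike eigenvalue is indistinguishable from pure noise, which is precisely why spectral methods fail below the threshold even when detection may remain information-theoretically possible.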
1 code implementation • NeurIPS 2018 • Benjamin Aubin, Antoine Maillard, Jean Barbier, Florent Krzakala, Nicolas Macris, Lenka Zdeborová
Heuristic tools from statistical physics have been used in the past to locate the phase transitions and compute the optimal learning and generalization errors in the teacher-student scenario in multi-layer neural networks.
2 code implementations • NeurIPS 2018 • Marylou Gabrié, Andre Manoel, Clément Luneau, Jean Barbier, Nicolas Macris, Florent Krzakala, Lenka Zdeborová
We examine a class of deep learning models with a tractable method to compute information-theoretic quantities.
1 code implementation • 10 Aug 2017 • Jean Barbier, Florent Krzakala, Nicolas Macris, Léo Miolane, Lenka Zdeborová
Non-rigorous predictions for the optimal errors existed for special cases of GLMs, e.g., for the perceptron, in the field of statistical physics, based on the so-called replica method.
no code implementations • NeurIPS 2016 • Jean Barbier, Mohamad Dia, Nicolas Macris, Florent Krzakala, Thibault Lesieur, Lenka Zdeborová
We also show that for a large set of parameters, an iterative algorithm called approximate message-passing is Bayes optimal.
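As a sketch of the kind of iteration involved, below is an approximate message-passing recursion for a rank-one spike with a Rademacher (±1) prior, warm-started from the spectral estimator; the model scaling, SNR, and initialization are illustrative assumptions rather than the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(5)
n, lam, T = 1500, 2.0, 20

x = rng.choice([-1.0, 1.0], size=n)                  # Rademacher spike
G = rng.standard_normal((n, n))
Y = (lam / n) * np.outer(x, x) + (G + G.T) / np.sqrt(2 * n)

# Spectral warm start to break the trivial symmetry of the all-zero point
v = np.linalg.eigh(Y)[1][:, -1]
m, m_old = np.sqrt(n) * v, np.zeros(n)

# AMP with the Bayes denoiser tanh for a +/-1 prior; the second term is the
# Onsager memory correction that decorrelates successive iterates.
for _ in range(T):
    f = np.tanh(m)
    onsager = lam ** 2 * np.mean(1 - f ** 2) * np.tanh(m_old)
    m, m_old = lam * (Y @ f) - onsager, m

# The global sign of the spike is unidentifiable, so report absolute overlap
x_hat = np.sign(m)
overlap = abs(x @ x_hat) / n
```

Above the transition (here the SNR is well past it) the iteration should recover most signs of the hidden vector, improving on the spectral initialization.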