1 code implementation • 31 May 2024 • Jean Barbier, Francesco Camilli, Marco Mondelli, Yizhou Xu
We consider a prototypical problem of Bayesian inference for a structured spiked model: a low-rank signal is corrupted by additive noise.
no code implementations • 11 Jul 2023 • Francesco Camilli, Daria Tieplova, Jean Barbier
We carry out an information-theoretical analysis of a two-layer neural network trained from input-output pairs generated by a teacher network with matching architecture, in overparametrized regimes.
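As a minimal sketch of this teacher-student setting (the sizes, nonlinearity, and weight scalings below are illustrative, not taken from the paper), a two-layer teacher generates the input-output pairs on which a student with matching architecture would be trained:

```python
import numpy as np

# Teacher-student data generation; dimensions and nonlinearity are illustrative.
rng = np.random.default_rng(0)
d, k, n = 100, 5, 1000                                  # input dimension, hidden units, samples

W_teacher = rng.standard_normal((k, d)) / np.sqrt(d)    # first-layer teacher weights
a_teacher = rng.standard_normal(k) / np.sqrt(k)         # second-layer teacher weights

X = rng.standard_normal((n, d))                         # i.i.d. Gaussian inputs
y = np.tanh(X @ W_teacher.T) @ a_teacher                # teacher outputs

# A student two-layer network with the same architecture would be trained on (X, y);
# the information-theoretic question is how well it can recover the teacher.
```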
no code implementations • 7 Feb 2023 • Teng Fu, Yuhao Liu, Jean Barbier, Marco Mondelli, Shansuo Liang, Tianqi Hou
We study the performance of a Bayesian statistician who estimates a rank-one signal corrupted by non-symmetric rotationally invariant noise with a generic distribution of singular values.
2 code implementations • 3 Oct 2022 • Jean Barbier, Francesco Camilli, Marco Mondelli, Manuel Sáenz

To answer this, we study the paradigmatic spiked matrix model of principal components analysis (PCA), where a rank-one matrix is corrupted by additive noise.
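A minimal sketch of this spiked model (the signal-to-noise value and the Gaussian spike prior are illustrative): a rank-one spike is hidden in a symmetric Gaussian noise matrix, and the top eigenvector of the observation gives the PCA estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr = 1000, 2.0                              # dimension and signal-to-noise ratio (illustrative)

x = rng.standard_normal(n)                      # hidden spike with unit-variance entries
Z = rng.standard_normal((n, n))
Z = (Z + Z.T) / np.sqrt(2)                      # symmetric Gaussian noise matrix

Y = np.sqrt(snr / n) * np.outer(x, x) + Z       # observed spiked matrix

# PCA estimate: top eigenvector of Y; the squared overlap measures recovery quality.
eigvals, eigvecs = np.linalg.eigh(Y)
v = eigvecs[:, -1]
overlap = (v @ x) ** 2 / (x @ x)
print(f"Squared overlap of the top eigenvector with the spike: {overlap:.3f}")
```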
no code implementations • 20 May 2022 • Jean Barbier, Tianqi Hou, Marco Mondelli, Manuel Sáenz
We consider the problem of estimating a rank-1 signal corrupted by structured rotationally invariant noise, and address the following question: how well do inference algorithms perform when the noise statistics are unknown and Gaussian noise is therefore assumed?
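To make the structured-noise setting concrete, here is a minimal sketch (the uniform eigenvalue profile and the SNR are placeholders, not the spectra studied in the paper): the noise is rotationally invariant, O diag(d) Oᵀ for a Haar-random orthogonal O, whereas a mismatched statistician would treat it as a Gaussian Wigner matrix.

```python
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(0)
n, snr = 400, 2.0

x = rng.standard_normal(n)                        # rank-1 signal

# Rotationally invariant noise: Haar-random eigenbasis combined with a
# non-semicircular eigenvalue profile (uniform spectrum chosen for illustration).
O = ortho_group.rvs(n, random_state=0)
d = rng.uniform(-np.sqrt(3), np.sqrt(3), size=n)  # unit-variance, non-Gaussian spectrum
Z = O @ np.diag(d) @ O.T

Y = (snr / n) * np.outer(x, x) + Z                # observed matrix
# A mismatched analysis would process Y as if Z were a Gaussian (Wigner) matrix.
```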
no code implementations • 14 Sep 2021 • Jean Barbier, Nicolas Macris
We consider increasingly complex models of matrix denoising and dictionary learning in the Bayes-optimal setting, in the challenging regime where the matrices to infer have a rank growing linearly with the system size.
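For the matrix denoising part, a minimal sketch of the extensive-rank regime (the aspect ratio and scalings are illustrative, not those of the paper): the signal is a product of factor matrices whose inner rank grows proportionally with the system size.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
alpha = 0.5                        # rank-to-dimension ratio (illustrative)
m = int(alpha * n)                 # rank grows linearly with the system size

U = rng.standard_normal((n, m))
V = rng.standard_normal((n, m))
S = U @ V.T / np.sqrt(n)           # extensive-rank signal matrix
Z = rng.standard_normal((n, n))    # Gaussian noise
Y = S + Z                          # noisy observation to be denoised
```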
no code implementations • 14 Jul 2021 • Jean Barbier, Wei-Kuo Chen, Dmitry Panchenko, Manuel Sáenz
Here we consider a model in which the responses are corrupted by Gaussian noise and are known to be generated as linear combinations of the covariates, but the distributions of the ground-truth regression coefficients and of the noise are unknown.
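A minimal sketch of this random linear estimation setting (dimensions, noise level, and the ridge estimator are illustrative; the point of the paper is that the coefficient and noise distributions are unknown to the statistician):

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 300, 200, 0.5                      # samples, covariates, noise level (illustrative)

beta = rng.standard_normal(d)                    # ground-truth regression coefficients
X = rng.standard_normal((n, d)) / np.sqrt(d)     # covariate matrix
y = X @ beta + sigma * rng.standard_normal(n)    # responses corrupted by Gaussian noise

# One possible inference procedure: a ridge estimator (illustrative choice).
lam = 0.1
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
mse = np.mean((beta_hat - beta) ** 2)
print(f"Per-coordinate squared error of the ridge estimate: {mse:.3f}")
```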
no code implementations • 28 Oct 2020 • Jean Barbier
In modern signal processing and machine learning, inference is performed in very high dimension: a large number of unknown characteristics of the system must be deduced from large amounts of high-dimensional noisy data.
no code implementations • 27 Sep 2020 • Jean Barbier, Dmitry Panchenko, Manuel Sáenz
We consider a generic class of log-concave, possibly random, (Gibbs) measures.
no code implementations • NeurIPS 2020 • Clément Luneau, Jean Barbier, Nicolas Macris
We consider generalized linear models in regimes where the number of nonzero components of the signal and the number of accessible data points are both sublinear with respect to the size of the signal.
no code implementations • NeurIPS 2020 • Jean Barbier, Nicolas Macris, Cynthia Rush
We determine statistical and computational limits for estimation of a rank-one matrix (the spike) corrupted by an additive Gaussian noise matrix, in a sparse limit where the underlying hidden vector (that constructs the rank-one matrix) has a number of non-zero components that scales sub-linearly with the total dimension of the vector, and the signal-to-noise ratio tends to infinity at an appropriate speed.
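A minimal sketch of this sparse regime (the sparsity exponent and SNR values are illustrative): the hidden vector has a number of non-zero entries growing sublinearly with the dimension, so the signal-to-noise ratio must diverge for recovery to be possible.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
rho = 0.5                          # sparsity exponent (illustrative)
k = int(n ** rho)                  # number of non-zeros grows sublinearly in n

x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.standard_normal(k)              # sparse hidden vector

snr = 50.0                         # in this regime the SNR must grow with n
Z = rng.standard_normal((n, n))
Z = (Z + Z.T) / np.sqrt(2)                       # symmetric Gaussian noise
Y = np.sqrt(snr / n) * np.outer(x, x) + Z        # sparse spiked observation
```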
no code implementations • 16 May 2020 • Jean Barbier, Galen Reeves
We consider a generalization of an important class of high-dimensional inference problems, namely spiked symmetric matrix models, often used as probabilistic models for principal component analysis.
no code implementations • 12 Nov 2019 • Jean Barbier, Nicolas Macris
We consider statistical models of estimation of a rank-one matrix (the spike) corrupted by an additive Gaussian noise matrix in the sparse limit.
no code implementations • 15 Jul 2019 • Jean Barbier
We consider Bayesian inference of signals with vector-valued entries.
no code implementations • 4 Apr 2019 • Jean Barbier
We show that, under a proper perturbation, these models are replica symmetric in the sense that the overlap matrix concentrates.
Information Theory • Disordered Systems and Neural Networks • Probability
no code implementations • 6 Dec 2018 • Jean Barbier, Mohamad Dia, Nicolas Macris, Florent Krzakala, Lenka Zdeborová
We characterize the detectability phase transitions in a large set of estimation problems, and show that there is a gap between what currently known polynomial-time algorithms (in particular, spectral methods and approximate message-passing) can achieve and what is information-theoretically possible.
1 code implementation • NeurIPS 2018 • Benjamin Aubin, Antoine Maillard, Jean Barbier, Florent Krzakala, Nicolas Macris, Lenka Zdeborová
Heuristic tools from statistical physics have been used in the past to locate the phase transitions and compute the optimal learning and generalization errors in the teacher-student scenario in multi-layer neural networks.
2 code implementations • NeurIPS 2018 • Marylou Gabrié, Andre Manoel, Clément Luneau, Jean Barbier, Nicolas Macris, Florent Krzakala, Lenka Zdeborová
We examine a class of deep learning models with a tractable method to compute information-theoretic quantities.
1 code implementation • 10 Aug 2017 • Jean Barbier, Florent Krzakala, Nicolas Macris, Léo Miolane, Lenka Zdeborová
Non-rigorous predictions for the optimal errors existed for special cases of GLMs, e.g. for the perceptron, in the field of statistical physics, based on the so-called replica method.
no code implementations • NeurIPS 2016 • Jean Barbier, Mohamad Dia, Nicolas Macris, Florent Krzakala, Thibault Lesieur, Lenka Zdeborová
We also show that for a large set of parameters, an iterative algorithm called approximate message-passing is Bayes optimal.
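A minimal sketch of an approximate message-passing iteration for the rank-one spiked model (the Rademacher prior, the tanh denoiser, and the SNR are illustrative; the Bayes-optimal version calibrates the denoiser through state evolution):

```python
import numpy as np

rng = np.random.default_rng(0)
n, snr, n_iter = 1000, 3.0, 25

x = rng.choice([-1.0, 1.0], size=n)              # Rademacher spike (illustrative prior)
Z = rng.standard_normal((n, n))
Z = (Z + Z.T) / np.sqrt(2)                       # symmetric Gaussian noise
Y = np.sqrt(snr / n) * np.outer(x, x) + Z        # spiked observation
M = Y / np.sqrt(n)                               # rescaled matrix used by the iteration

v = 0.1 * rng.standard_normal(n)                 # random initialization
m_old = np.zeros(n)
for _ in range(n_iter):
    m = np.tanh(v)                               # componentwise denoiser
    b = np.mean(1.0 - m ** 2)                    # Onsager correction: mean denoiser derivative
    v = M @ m - b * m_old
    m_old = m

overlap = abs(np.tanh(v) @ x) / n
print(f"Overlap of the AMP estimate with the spike: {overlap:.3f}")
```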