Search Results for author: Marc Mézard

Found 13 papers, 5 papers with code

Dynamical Regimes of Diffusion Models

no code implementations · 28 Feb 2024 · Giulio Biroli, Tony Bonnaire, Valentin De Bortoli, Marc Mézard

Using statistical physics methods, we study generative diffusion models in the regime where the dimension of space and the number of data are large, and the score function has been trained optimally.
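The "optimally trained" score in such analyses has a closed form over a finite dataset. As a hypothetical minimal sketch (dimensions, dataset, and the Ornstein-Uhlenbeck forward process are assumptions, not taken from the paper), the Bayes-optimal score of the diffused empirical distribution is a softmax-weighted average over the data points:

```python
import numpy as np

# Sketch: exact score of the diffused empirical distribution for an
# Ornstein-Uhlenbeck forward process x_t = e^{-t} a + sqrt(1-e^{-2t}) z
# over a finite dataset. All sizes here are illustrative assumptions.
rng = np.random.default_rng(3)
data = rng.standard_normal((50, 2))            # n data points in dimension d

def empirical_score(x, t):
    """Exact score grad_x log p_t(x) of the diffused empirical measure."""
    mean, var = np.exp(-t), 1.0 - np.exp(-2 * t)
    diffs = x - mean * data                    # (n, d) displacements
    logits = -np.sum(diffs ** 2, axis=1) / (2 * var)
    w = np.exp(logits - logits.max())
    w /= w.sum()                               # posterior weights over data points
    return -(w @ diffs) / var                  # weighted average of Gaussian scores

s = empirical_score(np.zeros(2), t=0.5)
```

At short times the posterior weights concentrate on the nearest data point, while at long times they spread uniformly, which is the kind of regime change the paper studies.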

The Decimation Scheme for Symmetric Matrix Factorization

no code implementations · 31 Jul 2023 · Francesco Camilli, Marc Mézard

Matrix factorization is an inference problem that has acquired importance due to its vast range of applications, ranging from dictionary learning to recommendation systems and machine learning with deep networks.

Dictionary Learning Recommendation Systems

Sparse Representations, Inference and Learning

no code implementations · 28 Jun 2023 · Clarissa Lauditi, Emanuele Troiani, Marc Mézard

In recent years statistical physics has proven to be a valuable tool to probe into large dimensional inference problems such as the ones occurring in machine learning.

Matrix factorization with neural networks

no code implementations · 5 Dec 2022 · Francesco Camilli, Marc Mézard

Matrix factorization is an important mathematical problem encountered in the context of dictionary learning, recommendation systems and machine learning.

Dictionary Learning Recommendation Systems

Learning curves of generic features maps for realistic datasets with a teacher-student model

1 code implementation · NeurIPS 2021 · Bruno Loureiro, Cédric Gerbelot, Hugo Cui, Sebastian Goldt, Florent Krzakala, Marc Mézard, Lenka Zdeborová

While still solvable in a closed form, this generalization is able to capture the learning curves for a broad range of realistic data sets, thus redeeming the potential of the teacher-student framework.

The Gaussian equivalence of generative models for learning with shallow neural networks

1 code implementation · 25 Jun 2020 · Sebastian Goldt, Bruno Loureiro, Galen Reeves, Florent Krzakala, Marc Mézard, Lenka Zdeborová

Here, we go beyond this simple paradigm by studying the performance of neural networks trained on data drawn from pre-trained generative models.

BIG-bench Machine Learning

Generalisation error in learning with random features and the hidden manifold model

no code implementations · ICML 2020 · Federica Gerace, Bruno Loureiro, Florent Krzakala, Marc Mézard, Lenka Zdeborová

In particular, we show how to obtain the so-called double descent behaviour analytically for logistic regression, with a peak at the interpolation threshold; we illustrate the superiority of orthogonal over random Gaussian projections in learning with random features; and we discuss the role played by correlations in the data generated by the hidden manifold model.

regression

Modelling the influence of data structure on learning in neural networks: the hidden manifold model

1 code implementation · 25 Sep 2019 · Sebastian Goldt, Marc Mézard, Florent Krzakala, Lenka Zdeborová

We demonstrate that learning of the hidden manifold model is amenable to an analytical treatment by proving a "Gaussian Equivalence Property" (GEP). We use the GEP to show that the dynamics of two-layer neural networks trained using one-pass stochastic gradient descent are captured by a set of integro-differential equations that track the performance of the network at all times.
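The hidden manifold model's data-generating process is easy to sketch: inputs live near a low-dimensional manifold, obtained by pushing latent Gaussian coordinates through a fixed nonlinearity, while labels depend only on the latents. The dimensions, nonlinearity, and label rule below are illustrative assumptions, not the paper's exact choices:

```python
import numpy as np

# Sketch of hidden-manifold-style data: high-dimensional inputs generated
# from low-dimensional latent coordinates via a fixed random projection
# and a pointwise nonlinearity; labels depend only on the latents.
rng = np.random.default_rng(1)
n, d_latent, d_input = 500, 10, 200

C = rng.standard_normal((d_latent, d_input))      # fixed projection matrix
Z = rng.standard_normal((n, d_latent))            # latent coordinates
X = np.tanh(Z @ C / np.sqrt(d_latent))            # inputs on a 10-dim manifold
y = np.sign(Z @ rng.standard_normal(d_latent))    # labels from the latents only
```

The point of the GEP is that, for wide networks trained on such X, relevant observables behave as if X were Gaussian with matched first and second moments.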

Generative Adversarial Network

Multi-Layer Generalized Linear Estimation

no code implementations · 24 Jan 2017 · Andre Manoel, Florent Krzakala, Marc Mézard, Lenka Zdeborová

We consider the problem of reconstructing a signal from multi-layered (possibly) non-linear measurements.
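A multi-layered non-linear measurement of this kind can be written as a composition of generalized linear layers. The sketch below is a hypothetical instance (two layers, tanh and sign activations, Gaussian weights — all assumptions for illustration) of the forward model whose inverse problem the paper studies:

```python
import numpy as np

# Sketch of a multi-layer generalized linear measurement model:
# the unknown signal x is observed only through y = f2(W2 f1(W1 x)).
# Layer widths and activations are illustrative assumptions.
rng = np.random.default_rng(4)
n, h, m = 100, 60, 40                         # signal, hidden, output dims

x = rng.standard_normal(n)                    # unknown signal to reconstruct
W1 = rng.standard_normal((h, n)) / np.sqrt(n) # known first-layer weights
W2 = rng.standard_normal((m, h)) / np.sqrt(h) # known second-layer weights
y = np.sign(W2 @ np.tanh(W1 @ x))             # observed non-linear measurements
```

Reconstruction then means inferring x from y given W1, W2, and the activations, which is where the message-passing machinery comes in.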

Phase transitions and sample complexity in Bayes-optimal matrix factorization

no code implementations · 6 Feb 2014 · Yoshiyuki Kabashima, Florent Krzakala, Marc Mézard, Ayaka Sakata, Lenka Zdeborová

We use the tools of statistical mechanics, the cavity and replica methods, to analyze the achievability and computational tractability of these inference problems in the Bayes-optimal setting, which amounts to assuming that the two matrices have random independent elements drawn from a known distribution and that this information is available to the inference algorithm.

Blind Source Separation Dictionary Learning +2

Probabilistic Reconstruction in Compressed Sensing: Algorithms, Phase Diagrams, and Threshold Achieving Matrices

1 code implementation · 18 Jun 2012 · Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun, Lenka Zdeborová

We further develop the asymptotic analysis of the corresponding phase diagrams, with and without measurement noise and for different signal distributions, and discuss the best possible reconstruction performance regardless of the algorithm.
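The reconstruction problem itself is easy to set up numerically. As a minimal stand-in for the message-passing algorithm analyzed in the paper, the sketch below uses plain iterative soft thresholding (ISTA) on noiseless random Gaussian measurements; the dimensions, sparsity, step size, and threshold are all illustrative assumptions:

```python
import numpy as np

# Compressed-sensing sketch: recover a k-sparse signal from m < n random
# linear measurements using ISTA (a simpler proxy for the paper's
# probabilistic message-passing reconstruction). Sizes are assumptions.
rng = np.random.default_rng(2)
n, m, k = 200, 80, 10                          # signal dim, measurements, sparsity

x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x                                      # noiseless measurements

x_hat = np.zeros(n)
step = 0.9 / np.linalg.norm(A, 2) ** 2         # step below 1/L for convergence
lam = 0.01                                     # small l1 penalty
for _ in range(500):
    r = x_hat + step * A.T @ (y - A @ x_hat)                   # gradient step
    x_hat = np.sign(r) * np.maximum(np.abs(r) - step * lam, 0) # soft threshold
```

With m well above the information-theoretic threshold for this sparsity, the iterate converges close to the true signal; the paper's phase diagrams characterize exactly where such reconstruction succeeds or fails.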

Statistical Mechanics Information Theory

Statistical physics-based reconstruction in compressed sensing

1 code implementation · 20 Sep 2011 · Florent Krzakala, Marc Mézard, François Sausset, Yifan Sun, Lenka Zdeborová

Compressed sensing is triggering a major evolution in signal acquisition.

Statistical Mechanics Information Theory
