no code implementations • 4 Mar 2024 • Tony Bonnaire, Giulio Biroli, Chiara Cammarota
Through both theoretical analysis and numerical experiments, we show that in practical cases, i.e., for finite but even very large $N$, successful optimization via gradient descent in phase retrieval is achieved by falling towards the good minima before reaching the bad ones.
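The setting described, gradient descent on a nonconvex phase-retrieval loss from a random initialization, can be sketched in a toy numpy experiment. All names and parameters below (dimensions, step size, number of measurements) are illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy phase retrieval: recover a planted signal w_star from phaseless
# measurements y_i = (a_i . w_star)^2 (illustrative sizes, not the paper's).
N, M = 20, 200                           # signal dimension, number of measurements
w_star = rng.normal(size=N) / np.sqrt(N)
A = rng.normal(size=(M, N))
y = (A @ w_star) ** 2

def loss(w):
    """Quartic empirical loss: mean squared error on intensities."""
    return np.mean(((A @ w) ** 2 - y) ** 2)

def grad(w):
    """Gradient of the loss above: 4/M * A.T @ ((Aw)^2 - y) * (Aw)."""
    Aw = A @ w
    return 4.0 * (A.T @ ((Aw ** 2 - y) * Aw)) / M

w = rng.normal(size=N) / np.sqrt(N)      # random initialization
loss_init = loss(w)
for _ in range(5000):                    # plain gradient descent, small step
    w -= 0.01 * grad(w)

# The signal is only identifiable up to the global sign flip w -> -w,
# so success is measured by the absolute overlap with w_star.
overlap = abs(w @ w_star) / (np.linalg.norm(w) * np.linalg.norm(w_star))
```

With enough measurements per dimension, descent from a random start typically reaches a good minimum correlated with the planted signal; the overlap quantifies this up to the sign ambiguity.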
no code implementations • 28 Feb 2024 • Giulio Biroli, Tony Bonnaire, Valentin De Bortoli, Marc Mézard
Using statistical physics methods, we study generative diffusion models in the regime where both the dimension of the space and the number of data points are large, and the score function has been trained optimally.
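One way to read "trained optimally" for a finite dataset is that the score equals the exact score of the empirical distribution convolved with the diffusion noise kernel; sampling the reverse dynamics with that exact score then collapses onto the training points. The sketch below illustrates this with an Ornstein-Uhlenbeck forward process in two dimensions; it is an assumed toy setup, not the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(1)

# Five "training points"; the exact score below is that of the empirical
# distribution noised by the forward process x_t = e^{-t} x_0 + noise,
# with Var = 1 - e^{-2t} (OU forward dynamics, stationary N(0, I)).
data = rng.normal(size=(5, 2))

def score(x, t):
    """Exact score of the noised empirical distribution at time t."""
    var = 1.0 - np.exp(-2.0 * t)
    diffs = x - np.exp(-t) * data            # (n, d) offsets from noised centers
    logw = -np.sum(diffs ** 2, axis=1) / (2.0 * var)
    w = np.exp(logw - logw.max())            # stable softmax weights
    w /= w.sum()
    return -(w[:, None] * diffs).sum(axis=0) / var

# Reverse-time SDE, Euler discretization:
# dx = (x + 2 * score(x, t)) dt + sqrt(2) dW, integrated from t = T down to dt.
T, steps = 3.0, 3000
dt = T / steps
x = rng.normal(size=2)                       # start from the stationary Gaussian
for i in range(steps):
    t = T - i * dt
    x = x + dt * (x + 2.0 * score(x, t)) + np.sqrt(2.0 * dt) * rng.normal(size=2)

# At the end of the backward trajectory, x sits near one training point:
# with the exact empirical score, sampling memorizes the dataset.
dist = np.min(np.linalg.norm(data - x, axis=1))
```

The design choice of an OU forward process keeps both the noising schedule and the reverse drift in closed form, so the only approximation is the Euler time discretization.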
1 code implementation • 16 Jun 2021 • Tony Bonnaire, Aurélien Decelle, Nabila Aghanim
A regularized version of Mixture Models is proposed to learn a principal graph from a distribution of $D$-dimensional data points.
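A crude sketch of the idea: fit mixture centers to points scattered around a one-dimensional structure with a simple EM loop, then link the centers into a graph, here a minimum spanning tree built with Prim's algorithm. This is a simplification under assumed parameters (fixed isotropic variance, equal weights), not the paper's regularized formulation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Noisy samples around a circle: a 1-D structure embedded in D = 2.
t = rng.uniform(0.0, 2.0 * np.pi, 500)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(500, 2))

K, sigma2 = 12, 0.01                        # number of centers, fixed variance
mu = X[rng.choice(len(X), K, replace=False)].copy()
for _ in range(50):                          # EM for an isotropic, equal-weight GMM
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)   # (n, K) squared distances
    r = np.exp(-d2 / (2.0 * sigma2))
    r /= r.sum(axis=1, keepdims=True)                # E-step: responsibilities
    mu = (r.T @ X) / r.sum(axis=0)[:, None]          # M-step: update centers

# Prim's algorithm: minimum spanning tree over the fitted centers.
d2 = ((mu[:, None, :] - mu[None]) ** 2).sum(-1)
in_tree, edges = [0], []
while len(in_tree) < K:
    best = None
    for i in in_tree:
        for j in range(K):
            if j not in in_tree and (best is None or d2[i, j] < best[2]):
                best = (i, j, d2[i, j])
    edges.append(best[:2])
    in_tree.append(best[1])
```

The resulting tree over the K centers is a rough stand-in for a principal graph: the centers summarize the point cloud and the edges recover its one-dimensional connectivity.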