no code implementations • 30 Apr 2024 • Jérôme Bolte, Tam Le, Éric Moulines, Edouard Pauwels
Motivated by the widespread use of approximate derivatives in machine learning and optimization, we study inexact subgradient methods with non-vanishing additive errors and step sizes.
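For intuition, a minimal sketch of an inexact subgradient step with a non-vanishing additive error on a toy nonsmooth objective; the objective, the error level `eps`, and the step size `alpha` are illustrative assumptions, not the paper's setting:

```python
import numpy as np

rng = np.random.default_rng(0)

def subgradient(x):
    # A subgradient of the nonsmooth toy objective f(x) = |x_1| + |x_2|.
    return np.sign(x)

x = np.array([2.0, -3.0])
alpha, eps = 0.05, 0.1          # non-vanishing step size and error level
for _ in range(500):
    noise = rng.normal(size=x.shape)
    noise *= eps / np.linalg.norm(noise)     # additive error of norm eps
    x = x - alpha * (subgradient(x) + noise)
print(x)  # hovers near the minimizer, in a ball whose radius depends on alpha and eps
```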
no code implementations • 10 Jul 2023 • Arnaud Descours, Tom Huix, Arnaud Guillin, Manon Michel, Éric Moulines, Boris Nectoux
We provide a rigorous analysis of training by variational inference (VI) of Bayesian neural networks in the two-layer and infinite-width case.
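As a rough illustration of the training procedure being analyzed, here is a minimal mean-field Gaussian VI loop for the output layer of a two-layer network with the 1/sqrt(width) scaling; the frozen first layer, the N(0, 1) prior, and all hyperparameters are simplifying assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)
N, width = 50, 100
X = rng.uniform(-1, 1, size=(N, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=N)

W1 = rng.normal(size=(1, width))   # frozen first layer (simplification)
phi = np.tanh(X @ W1)              # hidden features, shape (N, width)
mu, log_sig = np.zeros(width), np.zeros(width)  # mean-field Gaussian posterior
lr, sigma_obs = 1e-3, 0.1

for _ in range(2000):
    eps = rng.normal(size=width)
    sig = np.exp(log_sig)
    w = mu + sig * eps                        # reparameterization trick
    resid = y - phi @ w / np.sqrt(width)      # 1/sqrt(width) mean-field scaling
    g = -(phi.T @ resid) / (sigma_obs**2 * np.sqrt(width))  # data-term gradient in w
    mu -= lr * (g + mu)                            # + KL gradient w.r.t. mu
    log_sig -= lr * (g * eps * sig + sig**2 - 1)   # + KL gradient w.r.t. log_sig
```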
no code implementations • 1 Jun 2023 • Louis Grenioux, Éric Moulines, Marylou Gabrié
Energy-based models (EBMs) are versatile density estimation models that directly parameterize an unnormalized log density.
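A minimal sketch of what "directly parameterize an unnormalized log density" means; the quadratic energy is an illustrative choice:

```python
import numpy as np

def unnormalized_log_density(x, theta):
    # The EBM parameterizes log p(x) only up to an additive constant:
    # log p(x) = -E_theta(x) - log Z(theta), with Z never computed.
    # Here E_theta(x) = 0.5 * (x - theta)**2 is an illustrative energy.
    return -0.5 * (x - theta) ** 2

def score(x, theta):
    # The score d/dx log p(x) is free of the intractable constant Z,
    # which is why many EBM training and sampling methods rely on it.
    return -(x - theta)
```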
1 code implementation • 9 Feb 2023 • Louis Grenioux, Alain Durmus, Éric Moulines, Marylou Gabrié
Transport maps can ease the sampling of distributions with non-trivial geometries by transforming them into distributions that are easier to handle.
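A minimal sketch of the idea, assuming a hand-crafted triangular map rather than a learned one: an easy base distribution is pushed through a map onto a target with non-trivial geometry:

```python
import numpy as np

rng = np.random.default_rng(2)

def transport(z):
    # A fixed triangular map T pushing a standard Gaussian onto a
    # "banana"-shaped target (in practice T would be learned, e.g. a flow).
    x = np.empty_like(z)
    x[:, 0] = z[:, 0]
    x[:, 1] = z[:, 1] + z[:, 0] ** 2  # curvature gives the non-trivial geometry
    return x

z = rng.normal(size=(10_000, 2))   # samples from the easy base distribution
x = transport(z)                   # samples from the harder transported target
# The target density follows from the change-of-variables formula; this T
# has a unit-determinant Jacobian, so no volume correction is needed here.
```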
no code implementations • 15 Feb 2021 • Alain Durmus, Pablo Jiménez, Éric Moulines, Salem Said
This result gives rise to a family of stationary distributions indexed by the step-size, which is further shown to converge, as the step-size goes to 0, to a Dirac measure concentrated at the solution of the problem at hand.
no code implementations • 18 Nov 2020 • Thomas Mesnard, Théophane Weber, Fabio Viola, Shantanu Thakoor, Alaa Saade, Anna Harutyunyan, Will Dabney, Tom Stepleton, Nicolas Heess, Arthur Guez, Éric Moulines, Marcus Hutter, Lars Buesing, Rémi Munos
Credit assignment in reinforcement learning is the problem of measuring an action's influence on future rewards.
no code implementations • 27 May 2020 • Alain Durmus, Pablo Jiménez, Éric Moulines, Salem Said, Hoi-To Wai
This paper analyzes the convergence of a large class of Riemannian stochastic approximation (SA) schemes, which aim to tackle stochastic optimization problems.
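A minimal sketch of one such scheme, assuming the unit sphere as the manifold and a noisy eigenvector problem as the objective (both illustrative): the stochastic gradient is projected onto the tangent space, and the iterate is retracted back onto the manifold:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 5
A = rng.normal(size=(d, d)); A = (A + A.T) / 2  # maximize x^T A x on the unit sphere

x = rng.normal(size=d); x /= np.linalg.norm(x)
for t in range(1, 5001):
    A_noisy = A + 0.1 * rng.normal(size=(d, d))  # stochastic oracle for A
    euclid = 2 * A_noisy @ x                     # Euclidean gradient
    riem = euclid - (x @ euclid) * x             # project onto the tangent space
    x = x + riem / t                             # SA step with decreasing step size
    x /= np.linalg.norm(x)                       # retraction back onto the sphere
# x converges (up to sign) to the leading eigenvector of A.
```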
1 code implementation • 22 Jan 2020 • Nicolas Brosse, Carlos Riquelme, Alice Martin, Sylvain Gelly, Éric Moulines
Uncertainty quantification for deep learning is a challenging open problem.
no code implementations • 30 Oct 2019 • Anatoli Juditsky, Joon Kwon, Éric Moulines
We introduce and analyze a new family of first-order optimization algorithms which generalizes and unifies both mirror descent and dual averaging.
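A minimal sketch of the two endpoints being unified, with the negative-entropy mirror map on the simplex (an illustrative special case; the paper's family is more general):

```python
import numpy as np

def md_step(x, g, eta):
    # Entropic mirror descent on the simplex: with the negative-entropy
    # mirror map, the update is the multiplicative-weights rule.
    w = x * np.exp(-eta * g)
    return w / w.sum()

def da_step(g_sum, eta):
    # Dual averaging with the same mirror map: the iterate is recomputed
    # from the running sum of all past gradients.
    w = np.exp(-eta * g_sum)
    return w / w.sum()

rng = np.random.default_rng(6)
x, g_sum = np.full(3, 1 / 3), np.zeros(3)
for _ in range(100):
    g = rng.normal(size=3)        # a stream of (sub)gradients
    x = md_step(x, g, 0.1)
    g_sum += g
print(x, da_step(g_sum, 0.1))     # identical in this special case
```

Starting from the uniform point with a constant step size, the two rules coincide on the simplex; away from such special cases they differ, which is the gap a unified family is designed to capture.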
no code implementations • 30 May 2019 • Ngoc Huy Chau, Éric Moulines, Miklos Rásonyi, Sotirios Sabanis, Ying Zhang
We consider the problem of sampling from a target distribution, which is not necessarily log-concave, in the context of empirical risk minimization and stochastic optimization as presented in Raginsky et al. (2017).
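A minimal sketch of Langevin-type sampling from a non-log-concave target, in the spirit of the algorithms analyzed in this line of work; the bimodal toy target and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def grad_log_target(x):
    # Gradient of a non-log-concave toy target: an equal-weight mixture
    # of N(2, 1) and N(-2, 1) (a stand-in for an empirical-risk landscape).
    d1, d2 = x - 2.0, x + 2.0
    w1, w2 = np.exp(-0.5 * d1**2), np.exp(-0.5 * d2**2)
    return -(w1 * d1 + w2 * d2) / (w1 + w2)

x, h = 0.0, 0.05                   # state and step size
samples = []
for _ in range(20_000):
    x += h * grad_log_target(x) + np.sqrt(2 * h) * rng.normal()  # Langevin step
    samples.append(x)
# A histogram of `samples` approximates the bimodal target density.
```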
no code implementations • NeurIPS 2018 • Geneviève Robin, Hoi-To Wai, Julie Josse, Olga Klopp, Éric Moulines
In this paper, we introduce a low-rank interaction and sparse additive effects (LORIS) model, which combines matrix regression on a dictionary with a low-rank design to estimate main effects and interactions simultaneously.
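A rough sketch of the sparse-plus-low-rank flavor of such an estimator, via proximal gradient steps with an l1 prox for the sparse coefficients and a nuclear-norm prox for the low-rank part; the data model, penalties, and step size are all illustrative, not the LORIS estimator itself:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, K = 40, 30, 10
X = rng.normal(size=(K, n, p))                  # dictionary of covariate matrices
alpha_true = np.zeros(K); alpha_true[:2] = [1.5, -2.0]   # sparse main effects
U, V = rng.normal(size=(n, 2)), rng.normal(size=(2, p))  # rank-2 interactions
Y = np.tensordot(alpha_true, X, axes=1) + U @ V + 0.1 * rng.normal(size=(n, p))

def svd_soft_threshold(M, tau):
    # Proximal operator of the nuclear norm: shrink the singular values.
    U_, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U_ @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

alpha, Theta = np.zeros(K), np.zeros((n, p))
lr, lam_l1, lam_nuc = 5e-4, 0.05, 1.0
for _ in range(500):
    R = Y - np.tensordot(alpha, X, axes=1) - Theta        # residual
    grad_alpha = -np.tensordot(X, R, axes=([1, 2], [0, 1]))
    alpha = alpha - lr * grad_alpha
    alpha = np.sign(alpha) * np.maximum(np.abs(alpha) - lr * lam_l1, 0.0)  # l1 prox
    Theta = svd_soft_threshold(Theta + lr * R, lr * lam_nuc)               # nuclear prox
```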