Search Results for author: Éric Moulines

Found 10 papers, 2 papers with code

On Sampling with Approximate Transport Maps

1 code implementation • 9 Feb 2023 • Louis Grenioux, Alain Durmus, Éric Moulines, Marylou Gabrié

Transport maps can ease the sampling of distributions with non-trivial geometries by transforming them into distributions that are easier to handle.
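The mechanism is easy to see in one dimension. Below is a minimal sketch, assuming a hand-made smooth bijection as the transport map in place of the learned maps (e.g. normalizing flows) that the paper actually studies: base samples are pushed through the map and the remaining mismatch is corrected by self-normalized importance weighting.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: an unnormalized bimodal density, awkward to sample directly.
def log_target(x):
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

# Hand-made smooth bijection standing in for a learned transport map:
# it stretches the base N(0, 1) toward the target's two modes.
def transport(z):
    return z + 3.0 * np.tanh(2.0 * z)

def log_abs_det_jac(z):          # log |T'(z)| for the change of variables
    return np.log(1.0 + 6.0 * (1.0 - np.tanh(2.0 * z) ** 2))

# Push base samples through the map, then reweight by
# target density / pushforward density (self-normalized).
z = rng.standard_normal(50_000)
x = transport(z)
log_q = -0.5 * z ** 2 - log_abs_det_jac(z)   # pushforward log-density (unnormalized)
log_w = log_target(x) - log_q
w = np.exp(log_w - log_w.max())
w /= w.sum()

print("estimate of E[X^2]:", np.sum(w * x ** 2))   # true value: 10
```

For this equal-weight mixture of N(3, 1) and N(-3, 1), the true second moment is 10, which the reweighted estimate recovers despite the crude map.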

Low-rank Interaction with Sparse Additive Effects Model for Large Data Frames

no code implementations • NeurIPS 2018 • Geneviève Robin, Hoi-To Wai, Julie Josse, Olga Klopp, Éric Moulines

In this paper, we introduce the low-rank interaction and sparse additive effects (LORIS) model, which combines matrix regression on a dictionary with a low-rank design to estimate main effects and interactions simultaneously.

Clustering • Imputation
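A minimal sketch of the sparse-plus-low-rank idea, fitted here with alternating proximal steps (soft-thresholding for the sparse main effects, singular-value thresholding for the low-rank interactions); the dimensions, penalties, and optimizer are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data frame: Y = X @ beta (sparse main effects on a covariate
# dictionary X) + Theta (low-rank interactions) + noise.
n, p, q, rank = 100, 30, 20, 2
X = rng.standard_normal((n, p))
beta_true = np.zeros((p, q))
beta_true[:3] = 1.0                              # 3 active dictionary atoms
Theta_true = rng.standard_normal((n, rank)) @ rng.standard_normal((rank, q))
Y = X @ beta_true + Theta_true + 0.1 * rng.standard_normal((n, q))

def soft(A, t):                                  # l1 prox: entrywise soft-threshold
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def svt(A, t):                                   # nuclear prox: singular-value threshold
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U * np.maximum(s - t, 0.0)) @ Vt

beta = np.zeros((p, q))
Theta = np.zeros((n, q))
step = 1.0 / np.linalg.norm(X, 2) ** 2           # 1 / Lipschitz constant of the beta block
lam_l1, lam_nuc = 2.5, 5.0

for _ in range(500):                             # alternating proximal steps
    R = Y - X @ beta - Theta
    beta = soft(beta + step * X.T @ R, step * lam_l1)
    Theta = svt(Y - X @ beta, lam_nuc)           # exact minimizer in Theta

print("recovered rank:", np.linalg.matrix_rank(Theta))                    # ideally 2
print("active rows in beta:", int((np.abs(beta).max(axis=1) > 0).sum()))  # ideally 3
```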

On stochastic gradient Langevin dynamics with dependent data streams: the fully non-convex case

no code implementations • 30 May 2019 • Ngoc Huy Chau, Éric Moulines, Miklos Rásonyi, Sotirios Sabanis, Ying Zhang

We consider the problem of sampling from a target distribution, which is not necessarily log-concave, in the context of empirical risk minimization and stochastic optimization as presented in Raginsky et al. (2017).

Stochastic Optimization
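The update itself is a single line. A minimal sketch on a non-log-concave double-well target, with the paper's dependent data streams replaced by simple i.i.d. gradient noise (an assumption made to keep the example self-contained):

```python
import numpy as np

rng = np.random.default_rng(0)

# SGLD on a non-log-concave target pi(theta) ∝ exp(-U(theta)) with the
# double-well potential U(theta) = (theta^2 - 1)^2.
def grad_U(theta):
    return 4.0 * theta * (theta ** 2 - 1.0)

gamma = 1e-2                                     # step size
theta, samples = 0.0, []
for k in range(100_000):
    noisy_grad = grad_U(theta) + 0.5 * rng.standard_normal()
    # One SGLD step: a gradient step plus injected Gaussian noise.
    theta = theta - gamma * noisy_grad + np.sqrt(2.0 * gamma) * rng.standard_normal()
    samples.append(theta)

samples = np.array(samples[20_000:])             # discard burn-in
print("mass in the right well:", np.mean(samples > 0.0))   # ≈ 0.5 by symmetry
```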

Unifying mirror descent and dual averaging

no code implementations • 30 Oct 2019 • Anatoli Juditsky, Joon Kwon, Éric Moulines

We introduce and analyze a new family of first-order optimization algorithms which generalizes and unifies both mirror descent and dual averaging.
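In the entropic setup on the probability simplex both methods have closed-form updates, which makes the pair easy to compare side by side; the toy linear objective and step size below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimize <c, x> over the probability simplex. With the entropic mirror
# map, mirror descent is exponentiated gradient on the last iterate,
# while dual averaging re-maps the running gradient sum.
d, eta, T = 5, 0.05, 500
c = rng.standard_normal(d)

def grad(x):
    return c                                     # gradient of the linear loss

x_md = np.ones(d) / d                            # mirror descent iterate
g_sum = np.zeros(d)                              # dual averaging state

for t in range(T):
    # Mirror descent: multiplicative update, then re-normalize.
    x_md = x_md * np.exp(-eta * grad(x_md))
    x_md /= x_md.sum()
    # Dual averaging: map the accumulated gradients back (stable softmax).
    s = -eta * g_sum
    x_da = np.exp(s - s.max())
    x_da /= x_da.sum()
    g_sum += grad(x_da)

print("mirror descent argmax:", x_md.argmax())
print("dual averaging argmax:", x_da.argmax())
print("true minimizer:      ", c.argmin())
```

On a linear loss the two trajectories coincide; the updates genuinely differ once the gradients depend on the current iterate, which is where a unifying family of algorithms becomes non-trivial.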

Convergence Analysis of Riemannian Stochastic Approximation Schemes

no code implementations • 27 May 2020 • Alain Durmus, Pablo Jiménez, Éric Moulines, Salem Said, Hoi-To Wai

This paper analyzes the convergence of a large class of Riemannian stochastic approximation (SA) schemes aimed at tackling stochastic optimization problems.

Stochastic Optimization
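A minimal sketch of one such scheme on the unit sphere, estimating a leading eigenvector from noisy samples: project the stochastic Euclidean gradient onto the tangent space, take a step, and retract by renormalizing. The toy problem and step-size schedule are assumptions; the paper covers a far more general class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Riemannian SA on the unit sphere: track the leading eigenvector of a
# covariance matrix from noisy samples.
d = 10
eigvals = np.r_[np.ones(d - 1), 4.0]             # clear spectral gap at the top
A_sqrt = np.diag(np.sqrt(eigvals))

x = rng.standard_normal(d)
x /= np.linalg.norm(x)
for k in range(1, 20_001):
    z = A_sqrt @ rng.standard_normal(d)          # sample with covariance diag(eigvals)
    euclid_grad = -(z @ x) * z                   # stochastic grad of -(z^T x)^2 / 2
    riem_grad = euclid_grad - (euclid_grad @ x) * x   # tangent-space projection
    gamma = 1.0 / (100.0 + k)                    # decreasing step size
    x = x - gamma * riem_grad
    x /= np.linalg.norm(x)                       # retraction back onto the sphere

print("alignment with top eigenvector:", abs(x[-1]))   # -> close to 1
```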

On Riemannian Stochastic Approximation Schemes with Fixed Step-Size

no code implementations • 15 Feb 2021 • Alain Durmus, Pablo Jiménez, Éric Moulines, Salem Said

The analysis exhibits a family of stationary distributions indexed by the step-size, which is further shown to converge to a Dirac measure, concentrated at the solution of the problem at hand, as the step-size goes to 0.
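A Euclidean analogue of the phenomenon is easy to simulate (the paper's setting is Riemannian; this scalar example is only an illustration): with a constant step size the iterates settle into a stationary distribution around the minimizer, whose spread shrinks with the step size.

```python
import numpy as np

rng = np.random.default_rng(0)

# Constant step-size SGD on the quadratic U(theta) = theta^2 / 2 with
# additive gradient noise: the iterates reach a stationary distribution
# around the minimizer 0, not the minimizer itself.
def run(gamma, n_iter=100_000):
    theta, trace = 5.0, []
    for k in range(n_iter):
        noisy_grad = theta + rng.standard_normal()   # grad U(theta) + noise
        theta -= gamma * noisy_grad
        trace.append(theta)
    return np.array(trace[n_iter // 2:])             # keep the stationary part

for gamma in (0.1, 0.01, 0.001):
    t = run(gamma)
    print(f"step {gamma:6}: mean {t.mean():+.4f}, std {t.std():.4f}")
```

For this quadratic the stationary standard deviation is sqrt(gamma / (2 - gamma)) ≈ sqrt(gamma / 2), so the printed spreads shrink by roughly sqrt(10) per row, consistent with concentration at the solution as the step-size goes to 0.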

Balanced Training of Energy-Based Models with Adaptive Flow Sampling

no code implementations • 1 Jun 2023 • Louis Grenioux, Éric Moulines, Marylou Gabrié

Energy-based models (EBMs) are versatile density estimation models that directly parameterize an unnormalized log density.

Density Estimation • Variational Inference
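The training signal for such models is the classic contrast between data and model expectations of the energy gradient. A minimal sketch on a one-dimensional quadratic energy, with the paper's adaptive normalizing-flow sampler replaced by importance sampling from a fixed Gaussian proposal (a simplifying assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy EBM p(x) ∝ exp(-E(x)) with energy E(x) = a*x^2 + b*x, fitted by the
# maximum-likelihood gradient E_data[∇E] - E_model[∇E]. The model
# expectation is estimated by importance sampling from a fixed N(0, 16)
# proposal, a stand-in for an adaptive flow sampler.
data = 1.0 + 0.5 * rng.standard_normal(2_000)        # data ~ N(1, 0.25)
log_a, b = 0.0, 0.0                                  # a = exp(log_a) keeps a > 0

for step in range(2_000):
    a = np.exp(log_a)
    x = 4.0 * rng.standard_normal(4_000)             # proposal samples
    log_w = -(a * x ** 2 + b * x) + 0.5 * (x / 4.0) ** 2   # log(EBM / proposal)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    # ∇E with respect to (log_a, b) is (a*x^2, x); contrast the averages.
    g_log_a = np.mean(a * data ** 2) - np.sum(w * a * x ** 2)
    g_b = np.mean(data) - np.sum(w * x)
    log_a -= 0.05 * g_log_a
    b -= 0.05 * g_b

a = np.exp(log_a)
print("fitted mean:", -b / (2 * a))                  # ≈ 1.0
print("fitted var: ", 1.0 / (2 * a))                 # ≈ 0.25
```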

Law of Large Numbers for Bayesian two-layer Neural Network trained with Variational Inference

no code implementations • 10 Jul 2023 • Arnaud Descours, Tom Huix, Arnaud Guillin, Manon Michel, Éric Moulines, Boris Nectoux

We provide a rigorous analysis of training by variational inference (VI) of Bayesian neural networks in the two-layer and infinite-width case.

Variational Inference
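A minimal sketch of the training procedure the analysis concerns: mean-field Gaussian VI for a two-layer network, optimized by reparameterized stochastic gradients of the negative ELBO. The architecture, prior, noise level, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer network f(x) = mean_j a_j * tanh(w_j * x) with mean-field
# Gaussian variational posteriors q(w_j), q(a_j) and a N(0, 1) prior.
x = rng.uniform(-2, 2, 200)
y = np.sin(2 * x) + 0.1 * rng.standard_normal(200)

m, noise_var, lr = 50, 0.1 ** 2, 1e-3
mu_w, rho_w = 0.5 * rng.standard_normal(m), np.full(m, -2.0)  # q = N(mu, e^{2 rho})
mu_a, rho_a = 0.5 * rng.standard_normal(m), np.full(m, -2.0)

for step in range(5_000):
    # Reparameterization: sample weights from the variational posterior.
    ew, ea = rng.standard_normal(m), rng.standard_normal(m)
    sw, sa = np.exp(rho_w), np.exp(rho_a)
    w, a = mu_w + sw * ew, mu_a + sa * ea

    h = np.tanh(np.outer(x, w))                  # (n, m) hidden activations
    f = h @ a / m
    r = (f - y) / noise_var                      # d(NLL)/df

    g_a = h.T @ r / m                            # d(NLL)/da
    g_w = ((1 - h ** 2) * r[:, None]).T @ x * a / m   # d(NLL)/dw

    # Chain rule through the reparameterization, plus KL(q || N(0,1)) gradients.
    mu_a -= lr * (g_a + mu_a)
    rho_a -= lr * (g_a * ea * sa + sa ** 2 - 1.0)
    mu_w -= lr * (g_w + mu_w)
    rho_w -= lr * (g_w * ew * sw + sw ** 2 - 1.0)

pred = np.tanh(np.outer(x, mu_w)) @ mu_a / m     # predict at the variational mean
print("train RMSE at the variational mean:", np.sqrt(np.mean((pred - y) ** 2)))
```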
