Search Results for author: Eric Moulines

Found 64 papers, 11 papers with code

Online EM Algorithm for Latent Data Models

no code implementations • 27 Dec 2007 • Olivier Cappé, Eric Moulines

The resulting algorithm is usually simpler and is shown to achieve convergence to the stationary points of the Kullback-Leibler divergence between the marginal distribution of the observations and the model distribution at the optimal rate, i.e., that of the maximum likelihood estimator.
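To make the recursion concrete, here is a minimal sketch of a stochastic-approximation online EM for a two-component Gaussian mixture with unit variances: the E-step sufficient statistics are updated with a decaying step size, and the M-step is re-solved from the running statistics. The step-size schedule, initial values, and toy data stream are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def online_em_gmm(stream, gamma=lambda n: (n + 2) ** -0.6):
    # Online EM for a two-component Gaussian mixture with unit variances:
    # stochastic-approximation update of the E-step sufficient statistics,
    # followed by an explicit M-step (hypothetical illustration).
    w, mu = 0.5, np.array([-1.0, 1.0])   # initial weight of comp. 1 and means
    s0 = np.array([0.5, 0.5])            # running E[responsibilities]
    s1 = np.array([-0.5, 0.5])           # running E[responsibility * y]
    for n, y in enumerate(stream):
        dens = np.array([1 - w, w]) * np.exp(-0.5 * (y - mu) ** 2)
        r = dens / dens.sum()            # E-step for the new observation
        g = gamma(n)
        s0 += g * (r - s0)               # stochastic-approximation step
        s1 += g * (r * y - s1)
        w, mu = s0[1], s1 / s0           # M-step from the running statistics
    return w, mu

rng = np.random.default_rng(0)
data = np.where(rng.random(20000) < 0.3,
                rng.normal(2.0, 1.0, 20000), rng.normal(-1.0, 1.0, 20000))
print(online_em_gmm(data))               # weight ≈ 0.3, means ≈ (-1, 2)
```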

Kernel Change-point Analysis

no code implementations • NeurIPS 2008 • Zaïd Harchaoui, Eric Moulines, Francis R. Bach

Change-point analysis of an (unlabelled) sample of observations consists in, first, testing whether a change in distribution occurs within the sample and, second, if a change does occur, estimating the change-point instant after which the observations switch from one distribution to another.

Two-sample testing
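As a toy illustration of the second task, the sketch below scans all candidate split points and keeps the one maximizing a kernel discrepancy between the two segments. It uses a plain RBF-kernel MMD statistic as a stand-in for the paper's kernel Fisher discriminant ratio and does no calibration of the test; the bandwidth and margin are arbitrary choices.

```python
import numpy as np

def rbf_gram(x, y, bw=1.0):
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * bw ** 2))

def mmd2(x, y, bw=1.0):
    # (Biased) squared maximum mean discrepancy between two samples
    return (rbf_gram(x, x, bw).mean() + rbf_gram(y, y, bw).mean()
            - 2 * rbf_gram(x, y, bw).mean())

def scan_change_point(sample, bw=1.0, margin=20):
    # Estimate the change point as the split maximizing the discrepancy
    scores = [mmd2(sample[:t], sample[t:], bw)
              for t in range(margin, len(sample) - margin)]
    return margin + int(np.argmax(scores))

rng = np.random.default_rng(1)
s = np.concatenate([rng.normal(0.0, 1.0, 150), rng.normal(1.5, 1.0, 200)])
print(scan_change_point(s))              # close to the true instant, 150
```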

Non-Asymptotic Analysis of Stochastic Approximation Algorithms for Machine Learning

no code implementations • NeurIPS 2011 • Eric Moulines, Francis R. Bach

We consider the minimization of a convex objective function defined on a Hilbert space, which is only available through unbiased estimates of its gradients.

regression
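A minimal sketch of the setting: a Robbins-Monro recursion driven by unbiased gradient estimates, here for the expected logistic loss with a polynomially decaying step size. The constants c and alpha are illustrative, not the paper's prescriptions.

```python
import numpy as np

def sgd(grad_estimate, theta0, n, c=0.5, alpha=0.5):
    # Robbins-Monro recursion: theta_{k+1} = theta_k - gamma_k g_k(theta_k)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n + 1):
        theta -= c * k ** -alpha * grad_estimate(theta)
    return theta

rng = np.random.default_rng(0)
theta_star = np.array([2.0, -1.0])

def grad_estimate(theta):
    # Unbiased stochastic gradient of the expected logistic loss
    x = rng.normal(size=2)
    y = float(rng.random() < 1 / (1 + np.exp(-x @ theta_star)))
    return (1 / (1 + np.exp(-x @ theta)) - y) * x

print(sgd(grad_estimate, np.zeros(2), 50000))   # approaches theta_star
```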

Adaptive parallel tempering algorithm

2 code implementations • 4 May 2012 • Blazej Miasojedow, Eric Moulines, Matti Vihola

Parallel tempering is a generic Markov chain Monte Carlo sampling method which allows good mixing with multimodal target distributions, where conventional Metropolis-Hastings algorithms often fail.

Computation
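Below is a non-adaptive toy version showing the two ingredients parallel tempering combines: random-walk Metropolis moves within each tempered chain and swap moves between neighbouring temperatures. The fixed temperature ladder and step sizes are exactly what the paper's adaptive scheme tunes automatically; they are hard-coded here for brevity.

```python
import numpy as np

def parallel_tempering(logpi, n_iter=5000, betas=(1.0, 0.5, 0.25, 0.1),
                       step=1.0, seed=0):
    # Toy parallel tempering: Metropolis within each tempered chain,
    # plus swaps of adjacent temperatures (fixed, non-adaptive ladder)
    rng = np.random.default_rng(seed)
    K = len(betas)
    x = rng.normal(size=K)                     # one state per temperature
    samples = []
    for _ in range(n_iter):
        for k in range(K):                     # within-chain Metropolis
            prop = x[k] + step / np.sqrt(betas[k]) * rng.normal()
            if np.log(rng.random()) < betas[k] * (logpi(prop) - logpi(x[k])):
                x[k] = prop
        k = rng.integers(K - 1)                # propose swapping neighbours
        a = (betas[k] - betas[k + 1]) * (logpi(x[k + 1]) - logpi(x[k]))
        if np.log(rng.random()) < a:
            x[k], x[k + 1] = x[k + 1], x[k]
        samples.append(x[0])                   # keep the cold chain only
    return np.array(samples)

# Bimodal target where a single Metropolis-Hastings chain tends to get stuck
logpi = lambda x: np.logaddexp(-0.5 * (x - 4) ** 2, -0.5 * (x + 4) ** 2)
draws = parallel_tempering(logpi)
print(draws.mean(), (draws > 0).mean())        # mass split across both modes
```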

Non-strongly-convex smooth stochastic approximation with convergence rate O(1/n)

no code implementations • NeurIPS 2013 • Francis Bach, Eric Moulines

We consider the stochastic approximation problem where a convex function has to be minimized, given only the knowledge of unbiased estimates of its gradients at certain points, a framework which includes machine learning methods based on the minimization of the empirical risk.

regression
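The headline phenomenon, averaging rescuing a slow recursion, can be sketched in a few lines: constant-step SGD on a least-squares risk with Polyak-Ruppert averaging of the iterates. The step size and problem instance are illustrative assumptions.

```python
import numpy as np

def averaged_sgd(grad_estimate, theta0, n, gamma=0.05):
    # Constant-step SGD with Polyak-Ruppert averaging of the iterates
    theta = np.asarray(theta0, dtype=float)
    avg = np.zeros_like(theta)
    for k in range(n):
        theta -= gamma * grad_estimate(theta)
        avg += (theta - avg) / (k + 1)         # running average
    return avg

rng = np.random.default_rng(0)
theta_star = np.array([1.0, -2.0])

def grad_estimate(theta):
    # Unbiased gradient of the least-squares risk 0.5 * E[(y - x.theta)^2]
    x = rng.normal(size=2)
    y = x @ theta_star + 0.1 * rng.normal()
    return (x @ theta - y) * x

print(averaged_sgd(grad_estimate, np.zeros(2), 50000))  # close to theta_star
```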

Adaptive Multinomial Matrix Completion

no code implementations • 26 Aug 2014 • Olga Klopp, Jean Lafond, Eric Moulines, Joseph Salmon

The task of estimating a matrix given a sample of observed entries is known as the matrix completion problem.

Matrix Completion, Multi-class Classification (+1)

On the Online Frank-Wolfe Algorithms for Convex and Non-convex Optimizations

no code implementations • 5 Oct 2015 • Jean Lafond, Hoi-To Wai, Eric Moulines

With a strongly convex stochastic cost and when the optimal solution lies in the interior of the constraint set or the constraint set is a polytope, the regret bound and anytime optimality are shown to be ${\cal O}( \log^3 T / T )$ and ${\cal O}( \log^2 T / T)$, respectively, where $T$ is the number of rounds played.

High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm

no code implementations • 5 May 2016 • Alain Durmus, Eric Moulines

We consider in this paper the problem of sampling a high-dimensional probability distribution $\pi$ having a density with respect to the Lebesgue measure on $\mathbb{R}^d$, known up to a normalization constant $x \mapsto \pi(x)= \mathrm{e}^{-U(x)}/\int_{\mathbb{R}^d} \mathrm{e}^{-U(y)} \mathrm{d} y$.

Bayesian Inference
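The algorithm itself is a single line: an Euler-Maruyama discretisation of the Langevin diffusion, with no Metropolis correction. A minimal sketch on a standard Gaussian target, where $U(x) = \|x\|^2/2$, with an arbitrary step size:

```python
import numpy as np

def ula(grad_U, x0, gamma, n, seed=0):
    # Unadjusted Langevin Algorithm: Euler-Maruyama discretisation of the
    # Langevin diffusion targeting pi(x) ∝ exp(-U(x)). No accept/reject
    # step, so the chain carries an O(gamma) bias.
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    out = np.empty((n, x.size))
    for k in range(n):
        x = x - gamma * grad_U(x) + np.sqrt(2 * gamma) * rng.normal(size=x.size)
        out[k] = x
    return out

# Standard Gaussian target: U(x) = ||x||^2 / 2, so grad U(x) = x
chain = ula(lambda x: x, np.zeros(3), gamma=0.05, n=20000)
print(chain.mean(axis=0), chain.var(axis=0))   # ≈ 0 and ≈ 1
```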

Stochastic Gradient Richardson-Romberg Markov Chain Monte Carlo

no code implementations • NeurIPS 2016 • Alain Durmus, Umut Simsekli, Eric Moulines, Roland Badeau, Gaël Richard

We illustrate our framework on the popular Stochastic Gradient Langevin Dynamics (SGLD) algorithm and propose a novel SG-MCMC algorithm referred to as Stochastic Gradient Richardson-Romberg Langevin Dynamics (SGRRLD).

Bayesian Inference
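A sketch of the Richardson-Romberg idea applied to SGLD: average the same functional along two chains run at step sizes $\gamma$ and $\gamma/2$, then extrapolate to cancel the first-order discretisation bias. The paper couples the two chains through shared randomness to control the variance; they are run independently here, and the Gaussian toy posterior is an illustrative assumption.

```python
import numpy as np

def sgld_mean(f, grad_est, gamma, n, rng, x0=0.0):
    # Running average of f along one SGLD trajectory with step gamma
    x, acc = x0, 0.0
    for k in range(n):
        x += gamma * grad_est(x, rng) + np.sqrt(2 * gamma) * rng.normal()
        acc += (f(x) - acc) / (k + 1)
    return acc

def sgrrld(f, grad_est, gamma, n, seed=0):
    # Richardson-Romberg extrapolation of two SGLD averages at steps
    # gamma and gamma / 2 (independent chains for brevity)
    a1 = sgld_mean(f, grad_est, gamma, n, np.random.default_rng(seed))
    a2 = sgld_mean(f, grad_est, gamma / 2, 2 * n, np.random.default_rng(seed + 1))
    return 2 * a2 - a1

# Posterior N(0, 1): grad log pi(x) = -x, plus gradient noise; f(x) = x^2
grad_est = lambda x, rng: -x + 0.1 * rng.normal()
print(sgrrld(lambda x: x ** 2, grad_est, gamma=0.2, n=50000))  # ≈ 1.0
```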

Decentralized Frank-Wolfe Algorithm for Convex and Non-convex Problems

no code implementations • 5 Dec 2016 • Hoi-To Wai, Jean Lafond, Anna Scaglione, Eric Moulines

The convergence of the proposed algorithm is studied by viewing the decentralized algorithm as an inexact FW algorithm.

Matrix Completion, Sparse Learning

The promises and pitfalls of Stochastic Gradient Langevin Dynamics

no code implementations • NeurIPS 2018 • Nicolas Brosse, Alain Durmus, Eric Moulines

As $N$ becomes large, we show that the SGLD algorithm has an invariant probability measure which significantly departs from the target posterior and behaves like Stochastic Gradient Descent (SGD).

Non-asymptotic Analysis of Biased Stochastic Approximation Scheme

no code implementations • 2 Feb 2019 • Belhal Karimi, Blazej Miasojedow, Eric Moulines, Hoi-To Wai

We illustrate these settings with the online EM algorithm and the policy-gradient method for average reward maximization in reinforcement learning.

Reinforcement Learning (RL)

On the Global Convergence of (Fast) Incremental Expectation Maximization Methods

no code implementations • NeurIPS 2019 • Belhal Karimi, Hoi-To Wai, Eric Moulines, Marc Lavielle

To alleviate this problem, Neal and Hinton have proposed an incremental version of the EM (iEM) in which at each iteration the conditional expectation of the latent data (E-step) is updated only for a mini-batch of observations.
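A compact sketch of the iEM idea on a two-component Gaussian mixture with unit variances: per-observation sufficient statistics are stored, the E-step refreshes them one observation at a time (a mini-batch of size one), and the M-step is re-solved after each refresh. Initialisation and the toy data are illustrative choices.

```python
import numpy as np

def incremental_em(y, n_epochs=20, seed=0):
    # iEM for a two-component Gaussian mixture with unit variances
    rng = np.random.default_rng(seed)
    n = len(y)
    R = np.full((n, 2), 0.5)             # stored per-observation E-steps
    S0, S1 = R.sum(axis=0), R.T @ y      # aggregated sufficient statistics
    w, mu = 0.5, np.array([-1.0, 1.0])
    for _ in range(n_epochs):
        for i in rng.permutation(n):
            dens = np.array([1 - w, w]) * np.exp(-0.5 * (y[i] - mu) ** 2)
            r = dens / dens.sum()        # refreshed E-step for item i only
            S0 += r - R[i]               # swap i's old statistics for new
            S1 += (r - R[i]) * y[i]
            R[i] = r
            w, mu = S0[1] / n, S1 / S0   # immediate M-step
    return w, mu

rng = np.random.default_rng(1)
y = np.where(rng.random(2000) < 0.3,
             rng.normal(2.0, 1.0, 2000), rng.normal(-1.0, 1.0, 2000))
print(incremental_em(y))                 # weight ≈ 0.3, means ≈ (-1, 2)
```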

Finite Time Analysis of Linear Two-timescale Stochastic Approximation with Markovian Noise

no code implementations • 4 Feb 2020 • Maxim Kaledin, Eric Moulines, Alexey Naumov, Vladislav Tadic, Hoi-To Wai

Our bounds show that there is no discrepancy in the convergence rate between Markovian and martingale noise, only the constants are affected by the mixing time of the Markov chain.

Reinforcement Learning (RL)

Geom-SPIDER-EM: Faster Variance Reduced Stochastic Expectation Maximization for Nonconvex Finite-Sum Optimization

no code implementations • 24 Nov 2020 • Gersende Fort, Eric Moulines, Hoi-To Wai

The Expectation Maximization (EM) algorithm is a key reference for inference in latent variable models; unfortunately, its computational cost is prohibitive in the large scale learning setting.

A Stochastic Path-Integrated Differential EstimatoR Expectation Maximization Algorithm

no code implementations • 30 Nov 2020 • Gersende Fort, Eric Moulines, Hoi-To Wai

The Expectation Maximization (EM) algorithm is of key importance for inference in latent variable models, including mixtures of regressors, mixtures of experts, and models with missing observations.

A Stochastic Path Integral Differential EstimatoR Expectation Maximization Algorithm

no code implementations • NeurIPS 2020 • Gersende Fort, Eric Moulines, Hoi-To Wai

The Expectation Maximization (EM) algorithm is of key importance for inference in latent variable models, including mixtures of regressors, mixtures of experts, and models with missing observations.

Nonreversible MCMC from conditional invertible transforms: a complete recipe with convergence guarantees

no code implementations • 31 Dec 2020 • Achille Thin, Nikita Kotelevskii, Christophe Andrieu, Alain Durmus, Eric Moulines, Maxim Panov

This paper fills the gap by developing general tools to ensure that a class of nonreversible Markov kernels, possibly relying on complex transforms, has the desired invariance property and leads to convergent algorithms.

MISSO: Minimization by Incremental Stochastic Surrogate Optimization for Large Scale Nonconvex and Nonsmooth Problems

no code implementations • 1 Jan 2021 • Belhal Karimi, Hoi-To Wai, Eric Moulines, Ping Li

Many constrained, nonconvex and nonsmooth optimization problems can be tackled using the majorization-minimization (MM) method which alternates between constructing a surrogate function which upper bounds the objective function, and then minimizing this surrogate.

Variational Inference
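A minimal MM loop for one classical instance, logistic regression with Böhning's quadratic majorizer: the surrogate upper-bounds the loss with fixed curvature $X^\top X/4$, so each iteration minimizes it exactly and the loss decreases monotonically. MISSO replaces such exact surrogates with incremental stochastic ones; this deterministic sketch only illustrates the surrogate principle.

```python
import numpy as np

def mm_logistic(X, y, n_iter=200):
    # Majorization-minimization for logistic regression: at each step,
    # minimize a quadratic surrogate upper-bounding the loss (Bohning's
    # bound, curvature X^T X / 4), which guarantees monotone descent.
    n, d = X.shape
    H = X.T @ X / (4 * n)                # fixed curvature of the surrogate
    theta = np.zeros(d)
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ theta))
        grad = X.T @ (p - y) / n
        theta = theta - np.linalg.solve(H, grad)   # exact surrogate minimum
    return theta

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
theta_true = np.array([1.0, -1.0, 0.5])
y = (rng.random(500) < 1 / (1 + np.exp(-X @ theta_true))).astype(float)
print(mm_logistic(X, y))                 # approaches the MLE
```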

Rates of convergence for density estimation with generative adversarial networks

no code implementations • 30 Jan 2021 • Nikita Puchkin, Sergey Samsonov, Denis Belomestny, Eric Moulines, Alexey Naumov

In this work we undertake a thorough study of the non-asymptotic properties of vanilla generative adversarial networks (GANs).

Density Estimation

NEO: Non Equilibrium Sampling on the Orbit of a Deterministic Transform

1 code implementation • 17 Mar 2021 • Achille Thin, Yazid Janati, Sylvain Le Corff, Charles Ollion, Arnaud Doucet, Alain Durmus, Eric Moulines, Christian Robert

Sampling from a complex distribution $\pi$ and approximating its intractable normalizing constant Z are challenging problems.

The Perturbed Prox-Preconditioned SPIDER algorithm for EM-based large scale learning

no code implementations • 25 May 2021 • Gersende Fort, Eric Moulines

Incremental Expectation Maximization (EM) algorithms were introduced to adapt EM to the large scale learning framework by avoiding processing the full data set at each iteration.

QLSD: Quantised Langevin stochastic dynamics for Bayesian federated learning

no code implementations • 1 Jun 2021 • Maxime Vono, Vincent Plassier, Alain Durmus, Aymeric Dieuleveut, Eric Moulines

The objective of Federated Learning (FL) is to perform statistical inference for data which are decentralised and stored locally on networked clients.

Federated Learning

Tight High Probability Bounds for Linear Stochastic Approximation with Fixed Stepsize

no code implementations • NeurIPS 2021 • Alain Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov, Kevin Scaman, Hoi-To Wai

This family of methods arises in many machine learning tasks and is used to obtain approximate solutions of a linear system $\bar{A}\theta = \bar{b}$ for which $\bar{A}$ and $\bar{b}$ can only be accessed through random estimates $\{({\bf A}_n, {\bf b}_n): n \in \mathbb{N}^*\}$.

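A sketch of the recursion the bounds concern: fixed-step-size linear stochastic approximation $\theta_{k+1} = \theta_k - \gamma({\bf A}_k \theta_k - {\bf b}_k)$, with a Polyak-Ruppert average computed alongside. The toy random estimates and the step size are illustrative assumptions.

```python
import numpy as np

def lsa_fixed_step(sample_Ab, theta0, gamma, n, seed=0):
    # theta_{k+1} = theta_k - gamma * (A_k theta_k - b_k), fixed step size,
    # with a Polyak-Ruppert average of the iterates computed alongside
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    avg = np.zeros_like(theta)
    for k in range(n):
        A, b = sample_Ab(rng)
        theta = theta - gamma * (A @ theta - b)
        avg += (theta - avg) / (k + 1)
    return theta, avg

def sample_Ab(rng):
    # Toy random estimates with A_bar = I and b_bar = (1, 2); theta* = (1, 2)
    A = np.eye(2) + 0.1 * rng.normal(size=(2, 2))
    b = np.array([1.0, 2.0]) + 0.1 * rng.normal(size=2)
    return A, b

theta, avg = lsa_fixed_step(sample_Ab, np.zeros(2), gamma=0.05, n=20000)
print(theta, avg)   # both near (1, 2); the average fluctuates less
```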

DG-LMC: A Turn-key and Scalable Synchronous Distributed MCMC Algorithm via Langevin Monte Carlo within Gibbs

no code implementations • 11 Jun 2021 • Vincent Plassier, Maxime Vono, Alain Durmus, Eric Moulines

Performing reliable Bayesian inference on a big data scale is becoming a keystone in the modern era of machine learning.

Bayesian Inference

Monte Carlo Variational Auto-Encoders

2 code implementations • 30 Jun 2021 • Achille Thin, Nikita Kotelevskii, Arnaud Doucet, Alain Durmus, Eric Moulines, Maxim Panov

Variational auto-encoders (VAE) are popular deep latent variable models which are trained by maximizing an Evidence Lower Bound (ELBO).
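For reference, a minimal numpy computation of the single-datapoint ELBO for a diagonal-Gaussian encoder and standard normal prior, with the reconstruction term estimated by the reparameterisation trick and the KL term in closed form. The toy decoder $p(x \mid z) = \mathcal{N}(x; z, I)$ is an illustrative assumption.

```python
import numpy as np

def elbo(x, enc_mu, enc_logvar, log_lik, n_mc=8, seed=0):
    # Monte Carlo ELBO for a VAE with a diagonal-Gaussian encoder and a
    # standard normal prior, using the reparameterisation trick
    rng = np.random.default_rng(seed)
    recon = 0.0
    for _ in range(n_mc):
        eps = rng.normal(size=enc_mu.shape)
        z = enc_mu + np.exp(0.5 * enc_logvar) * eps    # z ~ q(z | x)
        recon += log_lik(x, z) / n_mc                  # E_q[log p(x | z)]
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians
    kl = 0.5 * np.sum(np.exp(enc_logvar) + enc_mu ** 2 - 1.0 - enc_logvar)
    return recon - kl

# Toy decoder: p(x | z) = N(x; z, I)
log_lik = lambda x, z: -0.5 * np.sum((x - z) ** 2 + np.log(2 * np.pi))
print(elbo(np.array([0.3, -0.7]), np.zeros(2), np.zeros(2), log_lik))
```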

Local-Global MCMC kernels: the best of both worlds

1 code implementation • 4 Nov 2021 • Sergey Samsonov, Evgeny Lagutin, Marylou Gabrié, Alain Durmus, Alexey Naumov, Eric Moulines

Recent works leveraging learning to enhance sampling have shown promising results, in particular by designing effective non-local moves and global proposals.

NEO: Non Equilibrium Sampling on the Orbits of a Deterministic Transform

1 code implementation • NeurIPS 2021 • Achille Thin, Yazid Janati El Idrissi, Sylvain Le Corff, Charles Ollion, Eric Moulines, Arnaud Doucet, Alain Durmus, Christian Robert

Sampling from a complex distribution $\pi$ and approximating its intractable normalizing constant $\mathrm{Z}$ are challenging problems.

Diffusion bridges vector quantized Variational AutoEncoders

1 code implementation • 10 Feb 2022 • Max Cohen, Guillaume Quispe, Sylvain Le Corff, Charles Ollion, Eric Moulines

In this work, we propose a new model to train the prior and the encoder/decoder networks simultaneously.

From Dirichlet to Rubin: Optimistic Exploration in RL without Bonuses

no code implementations • 16 May 2022 • Daniil Tiapkin, Denis Belomestny, Eric Moulines, Alexey Naumov, Sergey Samsonov, Yunhao Tang, Michal Valko, Pierre Menard

We propose the Bayes-UCBVI algorithm for reinforcement learning in tabular, stage-dependent, episodic Markov decision processes: a natural extension of the Bayes-UCB algorithm by Kaufmann et al. (2012) for multi-armed bandits.

Multi-Armed Bandits

FedPop: A Bayesian Approach for Personalised Federated Learning

no code implementations • 7 Jun 2022 • Nikita Kotelevskii, Maxime Vono, Eric Moulines, Alain Durmus

We provide non-asymptotic convergence guarantees for the proposed algorithms and illustrate their performances on various personalised federated learning tasks.

Federated Learning, Uncertainty Quantification

Variational Inference of overparameterized Bayesian Neural Networks: a theoretical and empirical study

1 code implementation • 8 Jul 2022 • Tom Huix, Szymon Majewski, Alain Durmus, Eric Moulines, Anna Korba

This paper studies Variational Inference (VI) for training Bayesian Neural Networks (BNNs) in the overparameterized regime, i.e., when the number of neurons tends to infinity.

Variational Inference

Finite-time High-probability Bounds for Polyak-Ruppert Averaged Iterates of Linear Stochastic Approximation

no code implementations • 10 Jul 2022 • Alain Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov

Our finite-time instance-dependent bounds for the averaged LSA iterates are sharp in the sense that the leading term we obtain coincides with the local asymptotic minimax limit.

BR-SNIS: Bias Reduced Self-Normalized Importance Sampling

1 code implementation • 13 Jul 2022 • Gabriel Cardoso, Sergey Samsonov, Achille Thin, Eric Moulines, Jimmy Olsson

This method is a wrapper in the sense that it uses the same proposal samples and importance weights as SNIS, but makes clever use of iterated sampling-importance resampling (ISIR) to form a bias-reduced version of the estimator.
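For context, here is the plain SNIS estimator that BR-SNIS wraps: draw from a proposal, weight by target/proposal density ratios, and self-normalize. The bias-reduction layer (iterated sampling-importance resampling over these same samples and weights) is omitted; the target, proposal, and test function below are toy choices.

```python
import numpy as np

def snis(log_target, sample_proposal, log_proposal, f, n, seed=0):
    # Self-normalized importance sampling:
    # E_pi[f] ≈ sum_i w_i f(x_i) / sum_i w_i with x_i ~ proposal
    rng = np.random.default_rng(seed)
    x = sample_proposal(rng, n)
    logw = log_target(x) - log_proposal(x)     # unnormalised log-weights
    w = np.exp(logw - logw.max())              # stabilised weights
    return np.sum(w * f(x)) / np.sum(w)

# Toy check: unnormalised target N(1, 1), proposal N(0, 2^2), f(x) = x
est = snis(lambda x: -0.5 * (x - 1) ** 2,
           lambda rng, n: rng.normal(0.0, 2.0, n),
           lambda x: -0.5 * (x / 2.0) ** 2,
           lambda x: x, 100000)
print(est)   # ≈ 1.0
```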

Optimistic Posterior Sampling for Reinforcement Learning with Few Samples and Tight Guarantees

1 code implementation • 28 Sep 2022 • Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Remi Munos, Alexey Naumov, Mark Rowland, Michal Valko, Pierre Menard

We consider reinforcement learning in an environment modeled by an episodic, finite, stage-dependent Markov decision process of horizon $H$ with $S$ states, and $A$ actions.

Reinforcement Learning (RL)

AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks

no code implementations • 7 Nov 2022 • Louis Leconte, Sholom Schechtman, Eric Moulines

First, we formulate the training of quantized neural networks (QNNs) as a smoothed sequence of interval-constrained optimization problems.

Stochastic Variable Metric Proximal Gradient with variance reduction for non-convex composite optimization

no code implementations • 2 Jan 2023 • Gersende Fort, Eric Moulines

This paper introduces a novel algorithm, the Perturbed Proximal Preconditioned SPIDER algorithm (3P-SPIDER), designed to solve finite sum non-convex composite optimization.

State and parameter learning with PaRIS particle Gibbs

no code implementations • 2 Jan 2023 • Gabriel Cardoso, Yazid Janati El Idrissi, Sylvain Le Corff, Eric Moulines, Jimmy Olsson

The particle-based, rapid incremental smoother PaRIS is a sequential Monte Carlo (SMC) technique allowing for efficient online approximation of expectations of additive functionals under the smoothing distribution in these models.

Stochastic Approximation Beyond Gradient for Signal Processing and Machine Learning

no code implementations • 22 Feb 2023 • Aymeric Dieuleveut, Gersende Fort, Eric Moulines, Hoi-To Wai

Stochastic Approximation (SA) is a classical algorithm that has had, since its early days, a huge impact on signal processing and, more recently, on machine learning, due to the need to deal with large amounts of data observed with uncertainty.

Rosenthal-type inequalities for linear statistics of Markov chains

no code implementations • 10 Mar 2023 • Alain Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov, Marina Sheshukova

In this paper, we establish novel deviation bounds for additive functionals of geometrically ergodic Markov chains similar to Rosenthal and Bernstein inequalities for sums of independent random variables.


Fast Rates for Maximum Entropy Exploration

1 code implementation • 14 Mar 2023 • Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Remi Munos, Alexey Naumov, Pierre Perrault, Yunhao Tang, Michal Valko, Pierre Menard

Finally, we apply developed regularization techniques to reduce sample complexity of visitation entropy maximization to $\widetilde{\mathcal{O}}(H^2SA/\varepsilon^2)$, yielding a statistical separation between maximum entropy exploration and reward-free exploration.

Reinforcement Learning (RL)

Restarted Bayesian Online Change-point Detection for Non-Stationary Markov Decision Processes

no code implementations • 1 Apr 2023 • Reda Alami, Mohammed Mahfoud, Eric Moulines

We consider the problem of learning in a non-stationary reinforcement learning (RL) environment, where the setting can be fully described by a piecewise stationary discrete-time Markov decision process (MDP).

Change Point Detection, Reinforcement Learning (RL)

One-Step Distributional Reinforcement Learning

no code implementations • 27 Apr 2023 • Mastane Achab, Reda Alami, Yasser Abdelaziz Dahou Djilali, Kirill Fedyanin, Eric Moulines

Reinforcement learning (RL) allows an agent interacting sequentially with an environment to maximize its long-term expected return.

Distributional Reinforcement Learning (+1)

FAVANO: Federated AVeraging with Asynchronous NOdes

no code implementations • 25 May 2023 • Louis Leconte, Van Minh Nguyen, Eric Moulines

In this paper, we propose a novel centralized Asynchronous Federated Learning (FL) framework, FAVANO, for training Deep Neural Networks (DNNs) in resource-constrained environments.

Federated Learning

Conformal Prediction for Federated Uncertainty Quantification Under Label Shift

no code implementations • 8 Jun 2023 • Vincent Plassier, Mehdi Makni, Aleksandr Rubashevskii, Eric Moulines, Maxim Panov

Federated Learning (FL) is a machine learning framework where many clients collaboratively train models while keeping the training data decentralized.

Conformal Prediction, Federated Learning (+2)

Monte Carlo guided Diffusion for Bayesian linear inverse problems

1 code implementation • 15 Aug 2023 • Gabriel Cardoso, Yazid Janati El Idrissi, Sylvain Le Corff, Eric Moulines

Ill-posed linear inverse problems arise frequently in various applications, from computational photography to medical imaging.

Bayesian Inference

Finite-Sample Analysis of the Temporal Difference Learning

no code implementations • 22 Oct 2023 • Sergey Samsonov, Daniil Tiapkin, Alexey Naumov, Eric Moulines

In this paper we consider the problem of obtaining sharp bounds for the performance of temporal difference (TD) methods with linear function approximation for policy evaluation in discounted Markov Decision Processes.
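The object of study can be sketched in a few lines: semi-gradient TD(0) with linear features, $V(s) \approx \phi(s)^\top \theta$, updated from a single stream of transitions under a fixed policy. The toy chain, features, and step size below are illustrative assumptions.

```python
import numpy as np

def td0_linear(env_step, phi, dim, alpha=0.05, gamma=0.99, n=50000, seed=0):
    # Semi-gradient TD(0) with linear value approximation V(s) = phi(s) @ theta
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)
    s = 0
    for _ in range(n):
        s_next, r = env_step(s, rng)
        delta = r + gamma * phi(s_next) @ theta - phi(s) @ theta   # TD error
        theta += alpha * delta * phi(s)
        s = s_next
    return theta

def env_step(s, rng):
    # Toy 5-state random walk on a ring; reward on entering state 4
    s_next = (s + (1 if rng.random() < 0.5 else -1)) % 5
    return s_next, float(s_next == 4)

print(td0_linear(env_step, lambda s: np.eye(5)[s], 5))   # estimated values
```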

Demonstration-Regularized RL

no code implementations • 26 Oct 2023 • Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Alexey Naumov, Pierre Perrault, Michal Valko, Pierre Menard

In particular, we study the demonstration-regularized reinforcement learning that leverages the expert demonstrations by KL-regularization for a policy learned by behavior cloning.

Reinforcement Learning (RL)

Bayesian ECG reconstruction using denoising diffusion generative models

no code implementations • 18 Dec 2023 • Gabriel V. Cardoso, Lisa Bedin, Josselin Duchateau, Rémi Dubois, Eric Moulines

In this work, we propose a denoising diffusion generative model (DDGM) trained with healthy electrocardiogram (ECG) data that focuses on ECG morphology and inter-lead dependence.

Denoising

Incentivized Learning in Principal-Agent Bandit Games

no code implementations • 6 Mar 2024 • Antoine Scheid, Daniil Tiapkin, Etienne Boursier, Aymeric Capitaine, El Mahdi El Mhamdi, Eric Moulines, Michael I. Jordan, Alain Durmus

This work considers a repeated principal-agent bandit game, where the principal can only interact with her environment through the agent.

Divide-and-Conquer Posterior Sampling for Denoising Diffusion Priors

no code implementations • 18 Mar 2024 • Yazid Janati, Alain Durmus, Eric Moulines, Jimmy Olsson

In this work, we take a different approach and utilize the specific structure of the DDM prior to define a set of intermediate and simpler posterior sampling problems, resulting in a lower approximation error compared to previous methods.

Denoising, Image Restoration

Fast and Consistent Learning of Hidden Markov Models by Incorporating Non-Consecutive Correlations

no code implementations • ICML 2020 • Robert Mattila, Cristian Rojas, Eric Moulines, Vikram Krishnamurthy, Bo Wahlberg

Can the parameters of a hidden Markov model (HMM) be estimated from a single sweep through the observations -- and additionally, without being trapped at a local optimum in the likelihood surface?

Time Series, Time Series Analysis
