no code implementations • 22 Feb 2023 • Aymeric Dieuleveut, Gersende Fort, Eric Moulines, Hoi-To Wai
Stochastic Approximation (SA) is a classical algorithm that has had, since its early days, a huge impact on signal processing and, nowadays, on machine learning, owing to the need to deal with large amounts of data observed with uncertainties.
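As a reminder of the basic recursion behind SA (a generic Robbins-Monro sketch, not the specific scheme analyzed in the paper; the helper name `field` is illustrative):

```python
import numpy as np

def stochastic_approximation(field, theta0, n_iter=1000, c=1.0):
    """Robbins-Monro recursion theta_{k+1} = theta_k - gamma_{k+1} * H(theta_k, X_{k+1}),
    where field(theta) returns a noisy evaluation H(theta, X) of the mean field."""
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, n_iter + 1):
        gamma = c / k  # step sizes with sum(gamma) = inf and sum(gamma^2) < inf
        theta = theta - gamma * field(theta)
    return theta

# Toy usage: find the root of h(theta) = theta - 2 from noisy evaluations.
rng = np.random.default_rng(0)
root = stochastic_approximation(lambda t: (t - 2.0) + rng.normal(scale=0.1), theta0=0.0)
```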
no code implementations • 2 Jan 2023 • Gersende Fort, Eric Moulines
This paper introduces a novel algorithm, the Perturbed Proximal Preconditioned SPIDER algorithm (3P-SPIDER), designed to solve finite-sum non-convex composite optimization problems.
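For background, here is a simplified sketch of a plain proximal SPIDER iteration for finite-sum composite problems; this is not the preconditioned and perturbed 3P-SPIDER variant introduced in the paper, and the helpers `grads` and `prox` are illustrative placeholders:

```python
import numpy as np

def prox_spider(grads, prox, x0, gamma=0.1, epochs=5, q=None):
    """Simplified proximal SPIDER for min_x (1/n) sum_i f_i(x) + g(x).

    grads[i](x) returns the gradient of f_i at x; prox(x, gamma) is the
    proximal map of gamma * g.
    """
    n = len(grads)
    q = q or n  # refresh the estimator with a full gradient every q steps
    rng = np.random.default_rng(0)
    x_prev = x = np.asarray(x0, dtype=float)
    v = np.mean([g(x) for g in grads], axis=0)  # initial full gradient
    for k in range(1, epochs * n):
        if k % q == 0:  # periodic full-gradient refresh
            v = np.mean([g(x) for g in grads], axis=0)
        else:           # recursive variance-reduced update from one component
            i = rng.integers(n)
            v = grads[i](x) - grads[i](x_prev) + v
        x_prev, x = x, prox(x - gamma * v, gamma)
    return x
```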
1 code implementation • 17 Mar 2022 • Gersende Fort, Barbara Pascal, Patrice Abry, Nelly Pustelnik
The originality of the devised algorithms stems from combining a Langevin Monte Carlo sampling scheme with proximal operators.
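For intuition, here is one standard way to combine a Langevin step with a proximal operator (a proximal-gradient Langevin sketch; the scheme actually devised in the paper may differ, and `grad_f` and `prox_g` are illustrative placeholders):

```python
import numpy as np

def proximal_langevin(grad_f, prox_g, x0, gamma=1e-2, n_samples=10_000):
    """Proximal-gradient Langevin iteration targeting exp(-f(x) - g(x)):
    x_{k+1} = prox_{gamma g}(x_k - gamma grad_f(x_k)) + sqrt(2 gamma) xi_k."""
    rng = np.random.default_rng(0)
    x = np.asarray(x0, dtype=float)
    chain = []
    for _ in range(n_samples):
        noise = np.sqrt(2 * gamma) * rng.standard_normal(x.shape)
        x = prox_g(x - gamma * grad_f(x), gamma) + noise
        chain.append(x.copy())
    return np.array(chain)

# Toy usage: sample from a density proportional to exp(-x^2/2 - |x|);
# the prox of gamma * |x| is the soft-thresholding operator.
soft = lambda x, g: np.sign(x) * np.maximum(np.abs(x) - g, 0.0)
samples = proximal_langevin(grad_f=lambda x: x, prox_g=soft, x0=np.zeros(1))
```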
no code implementations • 11 Feb 2022 • Patrice Abry, Gersende Fort, Barbara Pascal, Nelly Pustelnik
Yet, assessing the pandemic intensity within the pandemic period remains a challenging task because of the limited quality of the data made available by public health authorities (notably missing data, outliers, and pseudo-seasonalities), which calls for cumbersome, ad-hoc preprocessing (denoising) prior to estimation.
no code implementations • NeurIPS 2021 • Aymeric Dieuleveut, Gersende Fort, Eric Moulines, Geneviève Robin
The Expectation Maximization (EM) algorithm is the default algorithm for inference in latent variable models.
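As a reminder of what an EM iteration looks like, a textbook sketch for a two-component 1-D Gaussian mixture (an illustration of EM in general, not of the specific models treated in the paper):

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialization
    sigma2 = np.full(2, x.var())
    pi = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / sigma2) / np.sqrt(2 * np.pi * sigma2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of the mixture parameters
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma2 = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, sigma2

# Toy usage on data drawn from two Gaussians.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
pi, mu, sigma2 = em_gmm_1d(x)
```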
no code implementations • 3 Nov 2021 • Aymeric Dieuleveut, Gersende Fort, Eric Moulines, Geneviève Robin
The Expectation Maximization (EM) algorithm is the default algorithm for inference in latent variable models.
no code implementations • 25 May 2021 • Gersende Fort, Eric Moulines
A novel algorithm named Perturbed Prox-Preconditioned SPIDER (3P-SPIDER) is introduced.
no code implementations • 25 May 2021 • Gersende Fort, Eric Moulines
Incremental Expectation Maximization (EM) algorithms were introduced to adapt EM to the large-scale learning setting by avoiding processing the full data set at each iteration.
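A generic sketch of the incremental EM idea in the spirit of Neal and Hinton, where a single example's sufficient statistic is refreshed per iteration instead of scanning the full data set (the helpers `stats_fn` and `m_step` are illustrative, and this is not the exact scheme analyzed in the paper):

```python
import numpy as np

def incremental_em(stats_fn, m_step, data, s0, n_passes=5):
    """Incremental EM: keep one sufficient statistic per example and refresh
    a single example per iteration.

    stats_fn(x, theta) returns the E-step sufficient statistic for example x;
    m_step(s_bar) maps the averaged statistic to updated parameters.
    """
    n = len(data)
    theta = m_step(s0)
    s = [np.asarray(s0, dtype=float).copy() for _ in range(n)]  # per-example statistics
    s_bar = np.mean(s, axis=0)
    rng = np.random.default_rng(0)
    for _ in range(n_passes * n):
        i = rng.integers(n)                  # pick one example
        s_new = stats_fn(data[i], theta)     # partial E-step on that example only
        s_bar = s_bar + (s_new - s[i]) / n   # update the running average of statistics
        s[i] = s_new
        theta = m_step(s_bar)                # M-step from the averaged statistic
    return theta
```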
no code implementations • 29 Dec 2020 • Gersende Fort, P. Gach, Eric Moulines
Second, for the $n^{2/3}$-rate, the numerical illustrations show that, thanks to an optimized choice of the step size and of bounds expressed in terms of quantities characterizing the optimization problem at hand, our results yield a less conservative choice of the step size and provide better control of the convergence in expectation.
no code implementations • NeurIPS 2020 • Gersende Fort, Eric Moulines, Hoi-To Wai
The Expectation Maximization (EM) algorithm is of key importance for inference in latent variable models, including mixtures of regressors and experts and models with missing observations.
no code implementations • 30 Nov 2020 • Gersende Fort, Eric Moulines, Hoi-To Wai
The Expectation Maximization (EM) algorithm is of key importance for inference in latent variable models, including mixtures of regressors and experts and models with missing observations.
no code implementations • 24 Nov 2020 • Gersende Fort, Eric Moulines, Hoi-To Wai
The Expectation Maximization (EM) algorithm is a key reference for inference in latent variable models; unfortunately, its computational cost is prohibitive in the large-scale learning setting.