Search Results for author: Gersende Fort

Found 12 papers, 1 paper with code

Stochastic Approximation Beyond Gradient for Signal Processing and Machine Learning

no code implementations22 Feb 2023 Aymeric Dieuleveut, Gersende Fort, Eric Moulines, Hoi-To Wai

Stochastic Approximation (SA) is a classical algorithm that has had, since its early days, a huge impact on signal processing and, nowadays, on machine learning, owing to the need to deal with large amounts of data observed with uncertainty.
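For context, a minimal sketch of a Robbins-Monro SA iteration $\theta_{n+1} = \theta_n + \gamma_n H(\theta_n, X_n)$ is given below; this is an illustrative toy example (estimating a mean from noisy observations), not code from the paper.

```python
# Minimal Robbins-Monro stochastic approximation sketch (illustrative only).
# Goal: find the root of h(theta) = E[H(theta, X)] from noisy samples of H.
# Here H(theta, x) = x - theta, so the root is theta* = E[X].
import numpy as np

rng = np.random.default_rng(0)

theta = 0.0
for n in range(1, 10_001):
    x = rng.normal(loc=2.0, scale=1.0)   # noisy observation X_n
    gamma = 1.0 / n                      # diminishing step size gamma_n
    theta = theta + gamma * (x - theta)  # SA update with H(theta_n, X_n)

print(theta)  # close to E[X] = 2.0
```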

Stochastic Variable Metric Proximal Gradient with variance reduction for non-convex composite optimization

no code implementations2 Jan 2023 Gersende Fort, Eric Moulines

This paper introduces a novel algorithm, the Perturbed Proximal Preconditioned SPIDER algorithm (3P-SPIDER), designed to solve finite-sum non-convex composite optimization problems.
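As a rough illustration of the kind of scheme involved, the sketch below shows a plain SPIDER-type variance-reduced proximal gradient loop on a made-up finite-sum composite problem. It is not the authors' 3P-SPIDER, which additionally uses a variable metric (preconditioner) and a perturbed, inexact proximal step; the data, step size and batch sizes here are assumptions.

```python
# SPIDER-type proximal gradient sketch for min_x (1/n) sum_i f_i(x) + g(x),
# with f_i a least-squares loss and g = lam * ||.||_1 (toy problem).
import numpy as np

rng = np.random.default_rng(1)
n, d = 200, 20
A = rng.normal(size=(n, d))
b = rng.normal(size=n)
lam = 0.1

def grad_batch(x, idx):
    """Mini-batch gradient of the smooth part."""
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

def prox_l1(x, t):
    """Proximal operator of t * lam * ||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

x = np.zeros(d)
gamma = 0.05                  # step size (assumed, not tuned)
epoch_len, batch = 20, 10
for outer in range(50):
    v = grad_batch(x, np.arange(n))            # full gradient at epoch start
    for t in range(epoch_len):
        x_new = prox_l1(x - gamma * v, gamma)  # proximal gradient step
        idx = rng.choice(n, size=batch, replace=False)
        v = v + grad_batch(x_new, idx) - grad_batch(x, idx)  # SPIDER recursion
        x = x_new

print(np.round(x, 3))
```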

Covid19 Reproduction Number: Credibility Intervals by Blockwise Proximal Monte Carlo Samplers

1 code implementation17 Mar 2022 Gersende Fort, Barbara Pascal, Patrice Abry, Nelly Pustelnik

The originality of the devised algorithms stems from combining a Langevin Monte Carlo sampling scheme with proximal operators.
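To illustrate the generic idea of a Langevin step combined with a proximal operator, the sketch below implements a MYULA-style update on a made-up toy target; the blockwise samplers devised in the paper are more elaborate, and the target, step size and smoothing parameter here are assumptions.

```python
# Proximal Langevin (MYULA-style) sketch for a target pi(x) ~ exp(-f(x) - g(x)),
# with f smooth and g = lam_1 * ||x||_1 handled through its proximal operator.
import numpy as np

rng = np.random.default_rng(2)
d = 5
mu = np.ones(d)

def grad_f(x):
    """Gradient of the smooth part f(x) = 0.5 * ||x - mu||^2."""
    return x - mu

def prox_g(x, t):
    """Proximal operator of t * lam_1 * ||.||_1 (soft-thresholding)."""
    lam_1 = 0.5
    return np.sign(x) * np.maximum(np.abs(x) - t * lam_1, 0.0)

gamma, lamb = 0.01, 0.1   # step size and Moreau-Yosida smoothing parameter (assumed)
x = np.zeros(d)
samples = []
for k in range(20_000):
    # Gradient of the Moreau-Yosida-smoothed potential, then a Langevin step.
    drift = -grad_f(x) - (x - prox_g(x, lamb)) / lamb
    x = x + gamma * drift + np.sqrt(2 * gamma) * rng.normal(size=d)
    samples.append(x.copy())

print(np.mean(samples[5000:], axis=0))  # posterior-mean estimate from the chain
```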

Temporal evolution of the Covid19 pandemic reproduction number: Estimations from proximal optimization to Monte Carlo sampling

no code implementations11 Feb 2022 Patrice Abry, Gersende Fort, Barbara Pascal, Nelly Pustelnik

Yet, assessing the pandemic intensity within the pandemic period remains a challenging task because of the limited quality of the data made available by public health authorities (notably missing data, outliers and pseudo-seasonalities), which calls for cumbersome and ad hoc preprocessing (denoising) prior to estimation.


The perturbed prox-preconditioned spider algorithm: non-asymptotic convergence bounds

no code implementations25 May 2021 Gersende Fort, Eric Moulines

A novel algorithm named Perturbed Prox-Preconditioned SPIDER (3P-SPIDER) is introduced.

The Perturbed Prox-Preconditioned SPIDER algorithm for EM-based large scale learning

no code implementations25 May 2021 Gersende Fort, Eric Moulines

Incremental Expectation Maximization (EM) algorithms were introduced to adapt EM to the large-scale learning setting by avoiding processing the full data set at each iteration; a sketch of the incremental idea is given below.
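For illustration only, here is a minimal incremental EM pass in the Neal-Hinton spirit on a toy two-component Gaussian mixture: a single example's contribution to the complete-data sufficient statistics is refreshed per iteration. The data, initialization and number of passes are assumptions, and this is not the specific algorithm analysed in the paper.

```python
# Incremental EM sketch for a two-component Gaussian mixture with unit variances.
import numpy as np

rng = np.random.default_rng(3)
n = 500
z = rng.random(n) < 0.3
x = np.where(z, rng.normal(-2.0, 1.0, n), rng.normal(2.0, 1.0, n))

def e_step_one(xi, w, mu1, mu2):
    """Responsibility of component 1 for a single example."""
    p1 = w * np.exp(-0.5 * (xi - mu1) ** 2)
    p2 = (1 - w) * np.exp(-0.5 * (xi - mu2) ** 2)
    return p1 / (p1 + p2)

# Per-example sufficient statistics s_i = (r_i, r_i * x_i), initialised by one full E-step.
w, mu1, mu2 = 0.5, -1.0, 1.0
r = np.array([e_step_one(xi, w, mu1, mu2) for xi in x])
S_r, S_rx = r.sum(), (r * x).sum()

for it in range(20 * n):
    i = rng.integers(n)
    r_new = e_step_one(x[i], w, mu1, mu2)
    S_r += r_new - r[i]                 # refresh only example i's contribution
    S_rx += (r_new - r[i]) * x[i]
    r[i] = r_new
    # M-step from the running sufficient statistics
    w = S_r / n
    mu1 = S_rx / S_r
    mu2 = (x.sum() - S_rx) / (n - S_r)

print(round(w, 2), round(mu1, 2), round(mu2, 2))  # roughly (0.3, -2.0, 2.0)
```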

Fast Incremental Expectation Maximization for finite-sum optimization: nonasymptotic convergence

no code implementations29 Dec 2020 Gersende Fort, P. Gach, E. Moulines

Second, for the $n^{2/3}$-rate, the numerical illustrations show that, thanks to an optimized choice of the step size and of the bounds in terms of quantities characterizing the optimization problem at hand, our results yield a less conservative choice of the step size and provide better control of the convergence in expectation.

A Stochastic Path Integral Differential EstimatoR Expectation Maximization Algorithm

no code implementations NeurIPS 2020 Gersende Fort, Eric Moulines, Hoi-To Wai

The Expectation Maximization (EM) algorithm is of key importance for inference in latent variable models, including mixtures of regressors and of experts, and models with missing observations.

A Stochastic Path-Integrated Differential EstimatoR Expectation Maximization Algorithm

no code implementations30 Nov 2020 Gersende Fort, Eric Moulines, Hoi-To Wai

The Expectation Maximization (EM) algorithm is of key importance for inference in latent variable models, including mixtures of regressors and of experts, and models with missing observations.

Geom-SPIDER-EM: Faster Variance Reduced Stochastic Expectation Maximization for Nonconvex Finite-Sum Optimization

no code implementations24 Nov 2020 Gersende Fort, Eric Moulines, Hoi-To Wai

The Expectation Maximization (EM) algorithm is a key reference for inference in latent variable models; unfortunately, its computational cost is prohibitive in the large scale learning setting.
