no code implementations • 10 Dec 2023 • Yohann de Castro, Sébastien Gadat, Clément Marteau
This paper presents a novel algorithm that leverages Stochastic Gradient Descent strategies in conjunction with Random Features to augment the scalability of Conic Particle Gradient Descent (CPGD) specifically tailored for solving sparse optimisation problems on measures.
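The random-features idea this combination rests on (in the classical Rahimi–Recht sense) can be sketched in isolation; the Gaussian-kernel choice and the function names below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def random_features(x, omegas, phases):
    """Random Fourier feature map phi with phi_j(x) = sqrt(2/m) * cos(omega_j . x + b_j),
    so that phi(x) . phi(y) approximates the Gaussian kernel exp(-||x - y||^2 / 2)."""
    m = len(phases)
    return np.sqrt(2.0 / m) * np.cos(x @ omegas.T + phases)

rng = np.random.default_rng(0)
d, m = 3, 5_000
omegas = rng.normal(size=(m, d))               # frequencies ~ N(0, I) for a unit-bandwidth kernel
phases = rng.uniform(0.0, 2.0 * np.pi, size=m)

x, y = rng.normal(size=d), rng.normal(size=d)
approx = random_features(x, omegas, phases) @ random_features(y, omegas, phases)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2.0)
```

Replacing exact kernel evaluations with such finite-dimensional feature maps is what makes gradient steps on measures scale with the number of particles.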
no code implementations • 8 Jan 2023 • Marelys Crespo Navas, Sébastien Gadat, Xavier Gendre
In this paper, we investigate a continuous time version of the Stochastic Langevin Monte Carlo method, introduced in [WT11], that incorporates a stochastic sampling step inside the traditional over-damped Langevin diffusion.
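The discrete-time scheme of [WT11] that this continuous-time analysis starts from can be sketched on a toy one-dimensional target; the quadratic potential and noise level below are illustrative assumptions:

```python
import numpy as np

def sgld(noisy_grad, theta0, step, n_iter, rng):
    """Stochastic Gradient Langevin Dynamics (the discrete-time scheme of [WT11]):
    theta <- theta - step * (noisy gradient of U) + sqrt(2 * step) * N(0, 1)."""
    theta, traj = theta0, []
    for _ in range(n_iter):
        theta = theta - step * noisy_grad(theta, rng) + np.sqrt(2.0 * step) * rng.normal()
        traj.append(theta)
    return np.array(traj)

rng = np.random.default_rng(1)
# Toy target: Gibbs measure exp(-U) with U(t) = t^2 / 2, i.e. a standard Gaussian;
# the gradient U'(t) = t is only observed through a noisy sampling step.
noisy_grad = lambda t, rng: t + 0.5 * rng.normal()
traj = sgld(noisy_grad, theta0=5.0, step=0.05, n_iter=100_000, rng=rng)
samples = traj[10_000:]   # discard burn-in
```

The over-damped Langevin diffusion is the limit of this recursion as the step size vanishes, with the stochastic sampling step replacing the exact gradient.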
no code implementations • 10 Dec 2020 • Sébastien Gadat, Ioana Gavra
We adopt the point of view of stochastic algorithms and establish the almost sure convergence of these methods, when using a decreasing step size, towards the set of critical points of the target function.
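A minimal sketch of such a decreasing-step stochastic algorithm, assuming the classical Robbins–Monro step-size schedule and an illustrative quadratic objective (neither is specific to the paper):

```python
import numpy as np

def sgd_decreasing(noisy_grad, x0, n_iter, rng, c=1.0, alpha=0.75):
    """SGD with Robbins-Monro step sizes gamma_n = c / n**alpha: for alpha in (1/2, 1],
    sum(gamma_n) diverges while sum(gamma_n**2) converges, the classical condition
    behind almost-sure convergence to critical points."""
    x = x0
    for n in range(1, n_iter + 1):
        x = x - (c / n ** alpha) * noisy_grad(x, rng)
    return x

rng = np.random.default_rng(2)
# f(x) = (x - 3)^2 / 2; its gradient is observed with unit Gaussian noise.
noisy_grad = lambda x, rng: (x - 3.0) + rng.normal()
x_final = sgd_decreasing(noisy_grad, x0=0.0, n_iter=100_000, rng=rng)
```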
no code implementations • 8 Oct 2020 • Sébastien Gadat, Fabien Panloup, Clément Pellegrini
To answer this question, we establish some quantitative statistical bounds related to the underlying Poincaré constant of the model and establish new results about the numerical approximation of Gibbs measures by Cesàro averages of Euler schemes of (over-damped) Langevin diffusions.
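The approximation scheme mentioned here (Cesàro averages along an Euler discretization of the over-damped Langevin diffusion) can be sketched on a one-dimensional Gaussian toy case; the potential and step size below are illustrative assumptions:

```python
import numpy as np

def langevin_cesaro(grad_U, x0, step, n_iter, test_fn, rng):
    """Euler scheme of the over-damped Langevin diffusion
    dX_t = -grad U(X_t) dt + sqrt(2) dB_t, returning the Cesaro (running)
    average of test_fn along the path, which approximates the integral of
    test_fn against the Gibbs measure proportional to exp(-U)."""
    x, acc = x0, 0.0
    for _ in range(n_iter):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.normal()
        acc += test_fn(x)
    return acc / n_iter

rng = np.random.default_rng(3)
# U(x) = x^2 / 2: the Gibbs measure is N(0, 1), whose second moment is 1.
second_moment = langevin_cesaro(grad_U=lambda x: x, x0=0.0, step=0.01,
                                n_iter=200_000, test_fn=lambda x: x * x, rng=rng)
```

Averaging along a single long trajectory, rather than running many independent chains, is what makes the Poincaré constant of the model enter the error bounds.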
no code implementations • 23 Jul 2019 • Yohann de Castro, Sébastien Gadat, Clément Marteau, Cathy Maugis
This paper investigates the statistical estimation of a discrete mixing measure $\mu_0$ involved in a kernel mixture model.
no code implementations • 14 Sep 2016 • Sébastien Gadat, Fabien Panloup, Sofiane Saadane
This paper deals with a natural stochastic optimization procedure derived from the differential equation of the so-called heavy-ball method, introduced by Polyak in the 1960s in his seminal contribution [Pol64].
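The discrete recursion behind that differential equation can be sketched with noisy gradients; the fixed step, momentum value, and quadratic objective below are illustrative assumptions, not the paper's setting:

```python
import numpy as np

def stochastic_heavy_ball(noisy_grad, x0, n_iter, rng, step=0.01, momentum=0.9):
    """Polyak-style heavy-ball recursion with noisy gradients:
    x_{k+1} = x_k - step * g_k + momentum * (x_k - x_{k-1})."""
    x_prev = x = x0
    for _ in range(n_iter):
        x, x_prev = x - step * noisy_grad(x, rng) + momentum * (x - x_prev), x
    return x

rng = np.random.default_rng(4)
# f(x) = (x - 2)^2 / 2; its gradient is observed with small Gaussian noise.
noisy_grad = lambda x, rng: (x - 2.0) + 0.1 * rng.normal()
x_final = stochastic_heavy_ball(noisy_grad, x0=10.0, n_iter=10_000, rng=rng)
```

The momentum term `x - x_prev` is the discrete analogue of the velocity in the second-order ODE, which damps oscillations compared with plain gradient descent.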
no code implementations • 17 Feb 2015 • Sébastien Gadat, Fabien Panloup, Sofiane Saadane
Narendra-Shapiro (NS) algorithms are bandit-type algorithms introduced in the 1960s (with a view to applications in psychology or learning automata), whose convergence has been intensively studied in the stochastic algorithm literature.
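A rough sketch of this family, assuming a linear reward-inaction update in the spirit of learning automata (the exact NS recursion and its step-size schedule differ; the function name and parameters here are illustrative):

```python
import numpy as np

def reward_inaction_bandit(success_probs, n_rounds, gamma, rng):
    """Bernoulli bandit driven by a linear reward-inaction update, in the
    spirit of Narendra-Shapiro learning automata: the sampling probability
    of the played arm is reinforced only when a reward is received."""
    k = len(success_probs)
    p = np.full(k, 1.0 / k)
    for _ in range(n_rounds):
        arm = rng.choice(k, p=p)
        if rng.random() < success_probs[arm]:   # Bernoulli reward
            p = (1.0 - gamma) * p               # shrink every coordinate ...
            p[arm] += gamma                     # ... and reinforce the played arm
    return p

rng = np.random.default_rng(5)
p = reward_inaction_bandit([0.9, 0.2], n_rounds=20_000, gamma=0.01, rng=rng)
```

The update keeps `p` a probability vector at every round, and the sampling probability concentrates on the better arm as rewards accumulate.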
no code implementations • 4 Nov 2014 • Sébastien Gadat, Thierry Klein, Clément Marteau
Given an $n$-sample of random vectors $(X_i, Y_i)_{1 \leq i \leq n}$ whose joint law is unknown, the long-standing problem of supervised classification aims to \textit{optimally} predict the label $Y$ of a new observation $X$.