Search Results for author: Sébastien Gadat

Found 8 papers, 0 papers with code

FastPart: Over-Parameterized Stochastic Gradient Descent for Sparse Optimisation on Measures

no code implementations • 10 Dec 2023 • Yohann de Castro, Sébastien Gadat, Clément Marteau

This paper presents a novel algorithm that leverages Stochastic Gradient Descent strategies in conjunction with Random Features to augment the scalability of Conic Particle Gradient Descent (CPGD) specifically tailored for solving sparse optimisation problems on measures.

Mathematical Proofs
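To give a flavor of the particle-based approach, here is a minimal sketch that is entirely our own toy setup (spike positions, random Fourier features, step sizes are all illustrative choices, not the paper's algorithm): an over-parameterized particle cloud is fit to measurements of a sparse measure by stochastic gradient steps on both weights and positions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: recover a sparse measure (two spikes on [0, 1])
# from random Fourier feature measurements.
true_pos = np.array([0.3, 0.7])
true_w = np.array([1.0, 0.5])
freqs = rng.normal(scale=5.0, size=64)           # random feature frequencies

def feat(x):
    """Random Fourier features cos(freq * x), shape (n_features, n_particles)."""
    return np.cos(np.outer(freqs, x))

y = feat(true_pos) @ true_w                      # noiseless measurements

# Over-parameterized particle cloud: far more particles than true spikes.
n_part = 50
pos = rng.uniform(0.0, 1.0, size=n_part)
w = np.full(n_part, 0.02)

lr, batch = 0.02, 16
init_loss = 0.5 * np.mean((feat(pos) @ w - y) ** 2)
for _ in range(5000):
    idx = rng.choice(len(freqs), size=batch, replace=False)   # stochastic step
    Phi = np.cos(np.outer(freqs[idx], pos))                   # (batch, n_part)
    r = Phi @ w - y[idx]
    grad_w = Phi.T @ r / batch
    dPhi = -freqs[idx, None] * np.sin(np.outer(freqs[idx], pos))
    grad_pos = w * (dPhi.T @ r) / batch
    w -= lr * grad_w
    pos -= lr * grad_pos

final_loss = 0.5 * np.mean((feat(pos) @ w - y) ** 2)
```

Subsampling the features each iteration is what makes the step stochastic; the over-parameterization (50 particles for 2 spikes) is what lets plain gradient steps find a good fit.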

Stochastic Langevin Monte Carlo for (weakly) log-concave posterior distributions

no code implementations • 8 Jan 2023 • Marelys Crespo Navas, Sébastien Gadat, Xavier Gendre

In this paper, we investigate a continuous time version of the Stochastic Langevin Monte Carlo method, introduced in [WT11], that incorporates a stochastic sampling step inside the traditional over-damped Langevin diffusion.
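As a rough illustration of the idea (our own toy model and parameters, not the paper's analysis): discretize the over-damped Langevin diffusion with an Euler scheme and replace the full gradient of the potential by a minibatch estimate at each step, here for the log-concave posterior of a Gaussian mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: data x_i ~ N(mu, 1) with a N(0, 1) prior on mu.
n, mu_true = 100, 2.0
data = rng.normal(mu_true, 1.0, size=n)
post_mean = data.sum() / (n + 1)    # exact Gaussian posterior mean

h, batch = 1e-3, 10
theta, samples = 0.0, []
for step in range(20000):
    idx = rng.choice(n, size=batch, replace=False)
    # unbiased minibatch gradient of U(t) = t^2/2 + sum_i (t - x_i)^2 / 2
    grad = theta + (n / batch) * np.sum(theta - data[idx])
    # Euler step of the over-damped Langevin diffusion
    theta += -h * grad + np.sqrt(2 * h) * rng.normal()
    if step >= 2000:                # discard a burn-in phase
        samples.append(theta)

est = np.mean(samples)
```

The time average of the chain should track the exact posterior mean, up to discretization and minibatch noise.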

Asymptotic study of stochastic adaptive algorithm in non-convex landscape

no code implementations • 10 Dec 2020 • Sébastien Gadat, Ioana Gavra

We adopt the point of view of stochastic algorithms and establish the almost sure convergence of these methods, when using a decreasing step size, towards the set of critical points of the target function.
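A minimal sketch of this kind of behavior, with a setup entirely of our choosing: an Adagrad-type stochastic algorithm, whose effective steps decrease automatically, run on the non-convex double well f(x) = (x² − 1)², whose critical points are {−1, 0, +1}.

```python
import numpy as np

rng = np.random.default_rng(2)

def grad_f(x):
    # gradient of the double well f(x) = (x^2 - 1)^2
    return 4.0 * x * (x * x - 1.0)

x, acc, lr = 0.3, 0.0, 0.5
for _ in range(20000):
    g = grad_f(x) + 0.5 * rng.normal()      # noisy gradient oracle
    acc += g * g                            # accumulate squared gradients
    x -= lr * g / (np.sqrt(acc) + 1e-8)     # effective step shrinks ~ 1/sqrt(n)

# the iterate should settle near one of the minimizers +/-1
dist = min(abs(x - 1.0), abs(x + 1.0))
```

The noise prevents the iterate from stalling at the unstable critical point 0, so it ends up near one of the two minimizers.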

On the cost of Bayesian posterior mean strategy for log-concave models

no code implementations • 8 Oct 2020 • Sébastien Gadat, Fabien Panloup, Clément Pellegrini

To answer this question, we establish some quantitative statistical bounds related to the underlying Poincaré constant of the model and establish new results about the numerical approximation of Gibbs measures by Cesàro averages of Euler schemes of (over-damped) Langevin diffusions.
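The numerical object in question can be sketched in a few lines; the toy Gibbs measure below is our own choice, not the paper's model. We run an Euler scheme of the over-damped Langevin diffusion for the log-concave potential U(x) = (x − m)²/2 and keep a running Cesàro average of the iterates as an estimate of the posterior mean.

```python
import numpy as np

rng = np.random.default_rng(3)

m, h, n_iter = 1.5, 0.01, 50000
x, cesaro = 0.0, 0.0
for k in range(1, n_iter + 1):
    # Euler-Maruyama step for dX = -(X - m) dt + sqrt(2) dW
    x += -h * (x - m) + np.sqrt(2 * h) * rng.normal()
    # running Cesaro average of the iterates
    cesaro += (x - cesaro) / k
```

Since the Gibbs measure here is N(m, 1), the Cesàro average should concentrate around m.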

SuperMix: Sparse Regularization for Mixtures

no code implementations • 23 Jul 2019 • Yohann de Castro, Sébastien Gadat, Clément Marteau, Cathy Maugis

This paper investigates the statistical estimation of a discrete mixing measure $\mu_0$ involved in a kernel mixture model.

Stochastic Heavy Ball

no code implementations • 14 Sep 2016 • Sébastien Gadat, Fabien Panloup, Sofiane Saadane

This paper deals with a natural stochastic optimization procedure derived from the so-called Heavy-ball method differential equation, which was introduced by Polyak in the 1960s with his seminal contribution [Pol64].

Second-order methods • Stochastic Optimization
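A hedged sketch of one possible discretization (step-size schedule, momentum value, and test function are all our illustrative choices): Polyak's heavy-ball recursion with noisy gradients and a decreasing step, run on the quadratic f(x) = ||x||²/2.

```python
import numpy as np

rng = np.random.default_rng(4)

x = np.array([5.0, -3.0])
x_prev = x.copy()
beta = 0.9                                   # momentum ("heavy ball") parameter
for n in range(1, 5001):
    gamma = 0.5 / (n + 10) ** 0.6            # decreasing step size
    g = x + 0.1 * rng.normal(size=2)         # noisy gradient of f(x) = ||x||^2 / 2
    # heavy-ball update: gradient step plus momentum from the previous move
    x, x_prev = x - gamma * g + beta * (x - x_prev), x
```

With a decreasing step, the iterates forget the initial condition and the gradient noise is averaged out, so the final iterate sits close to the minimizer at the origin.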

Regret bounds for Narendra-Shapiro bandit algorithms

no code implementations • 17 Feb 2015 • Sébastien Gadat, Fabien Panloup, Sofiane Saadane

Narendra-Shapiro (NS) algorithms are bandit-type algorithms that were introduced in the sixties (with a view to applications in psychology or learning automata), and whose convergence has been intensively studied in the stochastic algorithm literature.
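The basic (non-penalized) NS update is simple enough to sketch; the reward means, step size, and horizon below are our own illustrative choices. An arm is drawn from the current probability vector, and on a unit reward the vector is moved toward that arm's vertex of the simplex (the "linear reward-inaction" rule; nothing happens on a zero reward).

```python
import numpy as np

rng = np.random.default_rng(5)

means = np.array([0.9, 0.1])        # Bernoulli reward means; arm 0 is best
p = np.array([0.5, 0.5])            # arm-sampling probabilities
gamma = 0.01                        # small constant step
for _ in range(5000):
    arm = rng.choice(2, p=p)
    reward = rng.random() < means[arm]
    if reward:                      # update only on success (reward-inaction)
        e = np.zeros(2)
        e[arm] = 1.0
        p = (1 - gamma) * p + gamma * e
```

The update keeps p on the simplex by construction, and with a clear reward gap the probability mass drifts onto the best arm.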

Classification with the nearest neighbor rule in general finite dimensional spaces: necessary and sufficient conditions

no code implementations • 4 Nov 2014 • Sébastien Gadat, Thierry Klein, Clément Marteau

Given an $n$-sample of random vectors $(X_i, Y_i)_{1 \leq i \leq n}$ whose joint law is unknown, the long-standing problem of supervised classification aims to optimally predict the label $Y$ of a new observation $X$.

General Classification
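The nearest neighbor rule itself fits in a few lines; the tiny training set below is our own illustration. The predicted label of a new point is simply the label of the closest training point in Euclidean distance.

```python
import numpy as np

def one_nn_predict(X_train, y_train, x):
    """Nearest neighbor rule: label of the closest training point."""
    d = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(d)]

# four labeled points in the plane, class 0 on the left, class 1 on the right
X_train = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y_train = np.array([0, 0, 1, 1])

pred = one_nn_predict(X_train, y_train, np.array([0.9, 0.2]))
```

The paper's question is when this rule is consistent in general finite dimensional spaces; the rule itself needs no training beyond storing the sample.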
