Search Results for author: Alain Durmus

Found 47 papers, 15 papers with code

Sliced-Wasserstein Flows: Nonparametric Generative Modeling via Optimal Transport and Diffusions

1 code implementation 21 Jun 2018 Antoine Liutkus, Umut Şimşekli, Szymon Majewski, Alain Durmus, Fabian-Robert Stöter

To the best of our knowledge, the proposed algorithm is the first nonparametric IGM algorithm with explicit theoretical guarantees.
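
As background for this and the other sliced-Wasserstein papers below, the distance itself is typically estimated by averaging one-dimensional Wasserstein distances between random projections of the two samples. The following is a minimal Monte Carlo sketch of that estimator only, not the flow-based generative algorithm of the paper; the number of projections and the toy data are arbitrary illustration choices.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of the sliced-Wasserstein distance SW_p between
    two empirical measures given as (n, d) arrays of equal sample size."""
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Random directions drawn uniformly on the unit sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # One-dimensional projections of both samples.
    X_proj, Y_proj = X @ theta.T, Y @ theta.T
    # In 1D, the Wasserstein distance between equally weighted empirical
    # measures is obtained by matching sorted samples.
    diff = np.sort(X_proj, axis=0) - np.sort(Y_proj, axis=0)
    return np.mean(np.abs(diff) ** p) ** (1.0 / p)

# Toy usage: two Gaussian samples with shifted means.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 3))
Y = rng.normal(0.5, 1.0, size=(500, 3))
print(sliced_wasserstein(X, Y))
```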

Monte Carlo Variational Auto-Encoders

2 code implementations 30 Jun 2021 Achille Thin, Nikita Kotelevskii, Arnaud Doucet, Alain Durmus, Eric Moulines, Maxim Panov

Variational auto-encoders (VAE) are popular deep latent variable models which are trained by maximizing an Evidence Lower Bound (ELBO).

Approximate Bayesian Computation with the Sliced-Wasserstein Distance

1 code implementation 28 Oct 2019 Kimia Nadjahi, Valentin De Bortoli, Alain Durmus, Roland Badeau, Umut Şimşekli

Approximate Bayesian Computation (ABC) is a popular method for approximate inference in generative models with intractable but easy-to-sample likelihood.

Image Denoising

Local-Global MCMC kernels: the best of both worlds

1 code implementation 4 Nov 2021 Sergey Samsonov, Evgeny Lagutin, Marylou Gabrié, Alain Durmus, Alexey Naumov, Eric Moulines

Recent works leveraging learning to enhance sampling have shown promising results, in particular by designing effective non-local moves and global proposals.

Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections

2 code implementations NeurIPS 2021 Kimia Nadjahi, Alain Durmus, Pierre E. Jacob, Roland Badeau, Umut Şimşekli

The Sliced-Wasserstein distance (SW) is being increasingly used in machine learning applications as an alternative to the Wasserstein distance and offers significant computational and statistical benefits.

On Sampling with Approximate Transport Maps

1 code implementation 9 Feb 2023 Louis Grenioux, Alain Durmus, Éric Moulines, Marylou Gabrié

Transport maps can ease the sampling of distributions with non-trivial geometries by transforming them into distributions that are easier to handle.

Tree-Based Diffusion Schrödinger Bridge with Applications to Wasserstein Barycenters

1 code implementation NeurIPS 2023 Maxence Noble, Valentin De Bortoli, Arnaud Doucet, Alain Durmus

In this paper, we consider an entropic version of mOT with a tree-structured quadratic cost, i.e., a function that can be written as a sum of pairwise cost functions between the nodes of a tree.

Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance

1 code implementation NeurIPS 2019 Kimia Nadjahi, Alain Durmus, Umut Şimşekli, Roland Badeau

Minimum expected distance estimation (MEDE) algorithms have been widely used for probabilistic models with intractable likelihood functions, and they have become increasingly popular due to their use in implicit generative modeling (e.g., Wasserstein generative adversarial networks, Wasserstein autoencoders).

Copula-like Variational Inference

1 code implementation NeurIPS 2019 Marcel Hirt, Petros Dellaportas, Alain Durmus

This family is based on new copula-like densities on the hypercube with non-uniform marginals, which can be sampled efficiently, i.e. with a complexity linear in the dimension of the state space.

Variational Inference

Maximum likelihood estimation of regularisation parameters in high-dimensional inverse problems: an empirical Bayesian approach. Part I: Methodology and Experiments

1 code implementation 26 Nov 2019 Ana F. Vidal, Valentin De Bortoli, Marcelo Pereyra, Alain Durmus

In this work, we propose a general empirical Bayesian method for setting regularisation parameters in imaging problems that are convex w.r.t.

Methodology Computation 62C12, 65C40, 68U10, 62F15, 65J20, 65C60, 65J22

Variational Inference of overparameterized Bayesian Neural Networks: a theoretical and empirical study

1 code implementation 8 Jul 2022 Tom Huix, Szymon Majewski, Alain Durmus, Eric Moulines, Anna Korba

This paper studies the Variational Inference (VI) used for training Bayesian Neural Networks (BNN) in the overparameterized regime, i.e., when the number of neurons tends to infinity.

Variational Inference

Statistical and Topological Properties of Sliced Probability Divergences

1 code implementation NeurIPS 2020 Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, Umut Şimşekli

The idea of slicing divergences has been proven to be successful when comparing two probability measures in various machine learning applications including generative modeling, and consists in computing the expected value of a `base divergence' between one-dimensional random projections of the two measures.

Bridging the Gap between Constant Step Size Stochastic Gradient Descent and Markov Chains

no code implementations 20 Jul 2017 Aymeric Dieuleveut, Alain Durmus, Francis Bach

We consider the minimization of an objective function given access to unbiased estimates of its gradient through stochastic gradient descent (SGD) with constant step-size.

High-dimensional Bayesian inference via the Unadjusted Langevin Algorithm

no code implementations 5 May 2016 Alain Durmus, Eric Moulines

We consider in this paper the problem of sampling a high-dimensional probability distribution $\pi$ having a density with respect to the Lebesgue measure on $\mathbb{R}^d$, known up to a normalization constant $x \mapsto \pi(x)= \mathrm{e}^{-U(x)}/\int_{\mathbb{R}^d} \mathrm{e}^{-U(y)} \mathrm{d} y$.

Bayesian Inference
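
For orientation, the Unadjusted Langevin Algorithm analysed in this paper discretises the overdamped Langevin diffusion targeting $\pi \propto \mathrm{e}^{-U}$. The sketch below shows the basic recursion on a toy Gaussian potential; the step size and iteration count are arbitrary, and no Metropolis correction is applied (hence "unadjusted").

```python
import numpy as np

def ula(grad_U, x0, gamma=1e-2, n_iter=10_000, rng=None):
    """Unadjusted Langevin Algorithm:
    x_{k+1} = x_k - gamma * grad U(x_k) + sqrt(2 * gamma) * N(0, I)."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for k in range(n_iter):
        noise = rng.standard_normal(x.size)
        x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * noise
        samples[k] = x
    return samples

# Toy target: standard Gaussian, U(x) = ||x||^2 / 2, so grad U(x) = x.
samples = ula(grad_U=lambda x: x, x0=np.zeros(2), rng=0)
print(samples.mean(axis=0), samples.var(axis=0))  # roughly 0 and 1, up to discretisation bias
```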

The promises and pitfalls of Stochastic Gradient Langevin Dynamics

no code implementations NeurIPS 2018 Nicolas Brosse, Alain Durmus, Eric Moulines

As $N$ becomes large, we show that the SGLD algorithm has an invariant probability measure which significantly departs from the target posterior and behaves like Stochastic Gradient Descent (SGD).
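
As a reminder of the algorithm being analysed, SGLD replaces the full-data gradient in a Langevin update with a rescaled minibatch estimate. The sketch below is a generic SGLD loop on a toy Gaussian-mean problem, not the paper's analysis; the step size, batch size and prior are illustrative assumptions.

```python
import numpy as np

def sgld(grad_log_prior, grad_log_lik, data, theta0, gamma=1e-3,
         batch_size=32, n_iter=5_000, rng=None):
    """Stochastic Gradient Langevin Dynamics: Langevin steps driven by a
    minibatch estimate of the gradient of the log-posterior."""
    rng = np.random.default_rng(rng)
    theta = np.array(theta0, dtype=float)
    N = len(data)
    trace = np.empty((n_iter, theta.size))
    for k in range(n_iter):
        batch = data[rng.choice(N, size=batch_size, replace=False)]
        # Unbiased gradient estimate: prior term plus rescaled minibatch likelihood term.
        grad = grad_log_prior(theta) + (N / batch_size) * sum(grad_log_lik(theta, x) for x in batch)
        theta = theta + 0.5 * gamma * grad + np.sqrt(gamma) * rng.standard_normal(theta.size)
        trace[k] = theta
    return trace

# Toy model: unknown Gaussian mean, unit noise variance, standard normal prior.
rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=1_000)
trace = sgld(grad_log_prior=lambda t: -t,
             grad_log_lik=lambda t, x: x - t,
             data=data, theta0=np.zeros(1), rng=1)
print(trace[1_000:].mean())  # close to the posterior mean (about 1.0)
```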

Stochastic Gradient Richardson-Romberg Markov Chain Monte Carlo

no code implementations NeurIPS 2016 Alain Durmus, Umut Simsekli, Eric Moulines, Roland Badeau, Gaël Richard

We illustrate our framework on the popular Stochastic Gradient Langevin Dynamics (SGLD) algorithm and propose a novel SG-MCMC algorithm referred to as Stochastic Gradient Richardson-Romberg Langevin Dynamics (SGRRLD).

Bayesian Inference

Markov Decision Process for MOOC users behavioral inference

no code implementations 10 Jul 2019 Firas Jarboui, Célya Gruson-daniel, Pierre Chanial, Alain Durmus, Vincent Rocchisani, Sophie-helene Goulet Ebongue, Anneliese Depoux, Wilfried Kirschenmann, Vianney Perchet

Studies of massive open online course (MOOC) users discuss the existence of typical profiles and their impact on the students' learning process.

Convergence rates and approximation results for SGD and its continuous-time counterpart

no code implementations 8 Apr 2020 Xavier Fontaine, Valentin De Bortoli, Alain Durmus

This paper proposes a thorough theoretical analysis of Stochastic Gradient Descent (SGD) with non-increasing step sizes.

Stochastic Optimization
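
To fix notation, the recursion studied here is plain SGD with a non-increasing step-size schedule, e.g. $\gamma_n = \gamma_0 / n^{\alpha}$, which the paper relates to a continuous-time counterpart. The sketch below is only a generic such recursion on a toy quadratic objective; the schedule parameters are arbitrary.

```python
import numpy as np

def sgd_decreasing_steps(grad_estimate, x0, gamma0=0.5, alpha=0.6, n_iter=20_000, rng=None):
    """SGD with non-increasing step sizes gamma_n = gamma0 / n**alpha:
    x_{n+1} = x_n - gamma_n * g_n, where g_n is an unbiased gradient estimate."""
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    for n in range(1, n_iter + 1):
        gamma_n = gamma0 / n ** alpha
        x = x - gamma_n * grad_estimate(x, rng)
    return x

# Toy objective f(x) = 0.5 * ||x - 1||^2 with additive Gaussian gradient noise.
grad = lambda x, rng: (x - 1.0) + 0.3 * rng.standard_normal(x.size)
print(sgd_decreasing_steps(grad, x0=np.zeros(3), rng=0))  # close to [1, 1, 1]
```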

Convergence Analysis of Riemannian Stochastic Approximation Schemes

no code implementations 27 May 2020 Alain Durmus, Pablo Jiménez, Éric Moulines, Salem Said, Hoi-To Wai

This paper analyzes the convergence for a large class of Riemannian stochastic approximation (SA) schemes, which aim at tackling stochastic optimization problems.

Stochastic Optimization

Quantitative Propagation of Chaos for SGD in Wide Neural Networks

no code implementations NeurIPS 2020 Valentin De Bortoli, Alain Durmus, Xavier Fontaine, Umut Simsekli

In comparison to previous works on the subject, we consider settings in which the sequence of stepsizes in SGD can potentially depend on the number of neurons and the iterations.

Nonreversible MCMC from conditional invertible transforms: a complete recipe with convergence guarantees

no code implementations 31 Dec 2020 Achille Thin, Nikita Kotelevskii, Christophe Andrieu, Alain Durmus, Eric Moulines, Maxim Panov

This paper fills the gap by developing general tools to ensure that a class of nonreversible Markov kernels, possibly relying on complex transforms, has the desired invariance property and leads to convergent algorithms.

On Riemannian Stochastic Approximation Schemes with Fixed Step-Size

no code implementations 15 Feb 2021 Alain Durmus, Pablo Jiménez, Éric Moulines, Salem Said

This result gives rise to a family of stationary distributions indexed by the step-size, which is further shown to converge to a Dirac measure, concentrated at the solution of the problem at hand, as the step-size goes to 0.

Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie

no code implementations 8 Mar 2021 Rémi Laumont, Valentin De Bortoli, Andrés Almansa, Julie Delon, Alain Durmus, Marcelo Pereyra

The proposed algorithms are demonstrated on several canonical problems such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualisation and quantification.

Bayesian Inference Deblurring +2

NEO: Non Equilibrium Sampling on the Orbit of a Deterministic Transform

1 code implementation 17 Mar 2021 Achille Thin, Yazid Janati, Sylvain Le Corff, Charles Ollion, Arnaud Doucet, Alain Durmus, Eric Moulines, Christian Robert

Sampling from a complex distribution $\pi$ and approximating its intractable normalizing constant Z are challenging problems.
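
For context, the simplest baseline for estimating an intractable normalising constant is plain importance sampling from a tractable proposal; NEO itself builds a more elaborate non-equilibrium estimator from the orbits of a deterministic transform. The sketch below shows only that baseline, with an arbitrary toy target and proposal.

```python
import numpy as np

def importance_sampling_Z(log_unnorm_target, sample_proposal, proposal_logpdf,
                          n_samples=100_000, rng=None):
    """Plain importance-sampling estimate of Z = int exp(log_unnorm_target(x)) dx
    using samples from a tractable proposal distribution."""
    rng = np.random.default_rng(rng)
    x = sample_proposal(rng, n_samples)
    log_w = log_unnorm_target(x) - proposal_logpdf(x)   # log importance weights
    m = log_w.max()                                     # log-sum-exp stabilisation
    return np.exp(m) * np.mean(np.exp(log_w - m))

# Toy example: unnormalised N(0, 1) density, so Z = sqrt(2 * pi) ~ 2.5066.
log_target = lambda x: -0.5 * np.sum(x ** 2, axis=-1)
sample_proposal = lambda rng, n: rng.normal(scale=2.0, size=(n, 1))
proposal_logpdf = lambda x: -0.5 * np.sum(x ** 2, axis=-1) / 4.0 - 0.5 * np.log(2 * np.pi * 4.0)
print(importance_sampling_Z(log_target, sample_proposal, proposal_logpdf, rng=0))
```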

QLSD: Quantised Langevin stochastic dynamics for Bayesian federated learning

no code implementations 1 Jun 2021 Maxime Vono, Vincent Plassier, Alain Durmus, Aymeric Dieuleveut, Eric Moulines

The objective of Federated Learning (FL) is to perform statistical inference for data which are decentralised and stored locally on networked clients.

Federated Learning

Tight High Probability Bounds for Linear Stochastic Approximation with Fixed Stepsize

no code implementations NeurIPS 2021 Alain Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov, Kevin Scaman, Hoi-To Wai

This family of methods arises in many machine learning tasks and is used to obtain approximate solutions of a linear system $\bar{A}\theta = \bar{b}$ for which $\bar{A}$ and $\bar{b}$ can only be accessed through random estimates $\{({\bf A}_n, {\bf b}_n): n \in \mathbb{N}^*\}$.
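
For concreteness, linear stochastic approximation with a fixed stepsize iterates $\theta_{n+1} = \theta_n + \alpha({\bf b}_n - {\bf A}_n \theta_n)$ so that the iterates track the solution of $\bar{A}\theta = \bar{b}$, and Polyak-Ruppert averaging of the iterates is the standard way to reduce the variance of the final estimate. The sketch below is a generic such recursion on a synthetic system, not the paper's bounds; the step size, burn-in and noise model are arbitrary choices.

```python
import numpy as np

def lsa_polyak_ruppert(sample_Ab, theta0, step=1e-2, n_iter=50_000, burn_in=5_000, rng=None):
    """Linear stochastic approximation with a fixed step size,
    theta_{n+1} = theta_n + step * (b_n - A_n @ theta_n),
    followed by Polyak-Ruppert averaging of the post-burn-in iterates."""
    rng = np.random.default_rng(rng)
    theta = np.array(theta0, dtype=float)
    avg, count = np.zeros_like(theta), 0
    for n in range(n_iter):
        A_n, b_n = sample_Ab(rng)              # random estimates of (A_bar, b_bar)
        theta = theta + step * (b_n - A_n @ theta)
        if n >= burn_in:
            count += 1
            avg += (theta - avg) / count       # running average of the iterates
    return theta, avg

# Synthetic system: A_bar = 2 I, b_bar = ones, so theta* = 0.5 * ones.
d = 3
def sample_Ab(rng):
    A = 2.0 * np.eye(d) + 0.1 * rng.standard_normal((d, d))
    b = np.ones(d) + 0.1 * rng.standard_normal(d)
    return A, b

last_iterate, averaged = lsa_polyak_ruppert(sample_Ab, theta0=np.zeros(d), rng=0)
print(averaged)  # close to [0.5, 0.5, 0.5]
```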

DG-LMC: A Turn-key and Scalable Synchronous Distributed MCMC Algorithm via Langevin Monte Carlo within Gibbs

no code implementations 11 Jun 2021 Vincent Plassier, Maxime Vono, Alain Durmus, Eric Moulines

Performing reliable Bayesian inference on a big data scale is becoming a keystone in the modern era of machine learning.

Bayesian Inference

NEO: Non Equilibrium Sampling on the Orbits of a Deterministic Transform

1 code implementation NeurIPS 2021 Achille Thin, Yazid Janati El Idrissi, Sylvain Le Corff, Charles Ollion, Eric Moulines, Arnaud Doucet, Alain Durmus, Christian Robert

Sampling from a complex distribution $\pi$ and approximating its intractable normalizing constant $\mathrm{Z}$ are challenging problems.

On Maximum-a-Posteriori estimation with Plug & Play priors and stochastic gradient descent

no code implementations 16 Jan 2022 Rémi Laumont, Valentin De Bortoli, Andrés Almansa, Julie Delon, Alain Durmus, Marcelo Pereyra

Bayesian methods to solve imaging inverse problems usually combine an explicit data likelihood function with a prior distribution that explicitly models expected properties of the solution.

Image Denoising

FedPop: A Bayesian Approach for Personalised Federated Learning

no code implementations 7 Jun 2022 Nikita Kotelevskii, Maxime Vono, Eric Moulines, Alain Durmus

We provide non-asymptotic convergence guarantees for the proposed algorithms and illustrate their performances on various personalised federated learning tasks.

Federated Learning Uncertainty Quantification

Finite-time High-probability Bounds for Polyak-Ruppert Averaged Iterates of Linear Stochastic Approximation

no code implementations 10 Jul 2022 Alain Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov

Our finite-time instance-dependent bounds for the averaged LSA iterates are sharp in the sense that the leading term we obtain coincides with the local asymptotic minimax limit.

Rosenthal-type inequalities for linear statistics of Markov chains

no code implementations 10 Mar 2023 Alain Durmus, Eric Moulines, Alexey Naumov, Sergey Samsonov, Marina Sheshukova

In this paper, we establish novel deviation bounds for additive functionals of geometrically ergodic Markov chains similar to Rosenthal and Bernstein inequalities for sums of independent random variables.

Non-asymptotic convergence bounds for Sinkhorn iterates and their gradients: a coupling approach

no code implementations 13 Apr 2023 Giacomo Greco, Maxence Noble, Giovanni Conforti, Alain Durmus

Our approach is novel in that it is purely probabilistic and relies on coupling by reflection techniques for controlled diffusions on the torus.
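
As a reference point, the Sinkhorn iterations whose convergence is being quantified alternate two scaling updates on the Gibbs kernel of the entropic OT problem. The sketch below is the textbook discrete version, not the coupling-based analysis of the paper; for very small regularisation it should be rewritten in the log domain for numerical stability.

```python
import numpy as np

def sinkhorn(a, b, C, eps=1.0, n_iter=1_000):
    """Sinkhorn iterations for entropic optimal transport between discrete
    measures a and b with cost matrix C and regularisation eps."""
    K = np.exp(-C / eps)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):
        v = b / (K.T @ u)                # update the second scaling vector
        u = a / (K @ v)                  # update the first scaling vector
    P = u[:, None] * K * v[None, :]      # entropic transport plan
    return P, np.sum(P * C)              # plan and its transport cost

# Toy example: uniform measures on two small point clouds in the plane.
rng = np.random.default_rng(0)
x, y = rng.normal(size=(5, 2)), rng.normal(loc=1.0, size=(5, 2))
C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)   # squared Euclidean cost
a = b = np.full(5, 0.2)
P, cost = sinkhorn(a, b, C)
print(cost)
```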

On the convergence of dynamic implementations of Hamiltonian Monte Carlo and No U-Turn Samplers

no code implementations 7 Jul 2023 Alain Durmus, Samuel Gruffaz, Miika Kailas, Eero Saksman, Matti Vihola

Under conditions similar to the ones existing for HMC, we also show that NUTS is geometrically ergodic.

VITS: Variational Inference Thompson Sampling for contextual bandits

no code implementations 19 Jul 2023 Pierre Clavier, Tom Huix, Alain Durmus

In this paper, we introduce and analyze a variant of the Thompson sampling (TS) algorithm for contextual bandits.

Multi-Armed Bandits Thompson Sampling +1
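
For comparison with the variational variant introduced here, the standard baseline is linear-Gaussian Thompson sampling, which keeps an exact Gaussian posterior per arm and acts greedily on a posterior sample. The sketch below implements that baseline, not VITS; the prior and noise scales are arbitrary.

```python
import numpy as np

class LinearThompsonSampling:
    """Standard linear Thompson sampling for contextual bandits with a
    Gaussian prior and posterior over each arm's reward parameter."""

    def __init__(self, n_arms, dim, noise_std=0.1, prior_var=1.0):
        self.noise_std = noise_std
        # Per-arm posterior precision matrices and accumulated context * reward sums.
        self.precisions = [np.eye(dim) / prior_var for _ in range(n_arms)]
        self.xty = [np.zeros(dim) for _ in range(n_arms)]

    def select_arm(self, context, rng):
        scores = []
        for P, xty in zip(self.precisions, self.xty):
            cov = np.linalg.inv(P)
            mean = cov @ xty / self.noise_std ** 2
            theta = rng.multivariate_normal(mean, cov)   # posterior sample
            scores.append(context @ theta)
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.precisions[arm] += np.outer(context, context) / self.noise_std ** 2
        self.xty[arm] += reward * context

# Toy run: two arms with different true reward parameters.
rng = np.random.default_rng(0)
true_thetas = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
agent = LinearThompsonSampling(n_arms=2, dim=2)
for t in range(2_000):
    context = rng.normal(size=2)
    arm = agent.select_arm(context, rng)
    reward = context @ true_thetas[arm] + 0.1 * rng.normal()
    agent.update(arm, context, reward)
```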

Score diffusion models without early stopping: finite Fisher information is all you need

no code implementations 23 Aug 2023 Giovanni Conforti, Alain Durmus, Marta Gentiloni Silveri

Our study provides a rigorous analysis, yielding simple, improved and sharp convergence bounds in KL applicable to any data distribution with finite Fisher information with respect to the standard Gaussian distribution.

Implicit Bias in Noisy-SGD: With Applications to Differentially Private Training

no code implementations 13 Feb 2024 Tom Sander, Maxime Sylvestre, Alain Durmus

We first show that the phenomenon extends to Noisy-SGD (DP-SGD without clipping), suggesting that the stochasticity (and not the clipping) is the cause of this implicit bias, even with additional isotropic Gaussian noise.

Unbiased constrained sampling with Self-Concordant Barrier Hamiltonian Monte Carlo

1 code implementation NeurIPS 2023 Maxence Noble, Valentin De Bortoli, Alain Durmus

In this paper, we propose Barrier Hamiltonian Monte Carlo (BHMC), a version of the HMC algorithm which aims at sampling from a Gibbs distribution $\pi$ on a manifold $\mathrm{M}$, endowed with a Hessian metric $\mathfrak{g}$ derived from a self-concordant barrier.
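
As a baseline for comparison, standard (Euclidean, unconstrained) HMC combines leapfrog integration of Hamiltonian dynamics with a Metropolis correction; the paper's BHMC instead uses the Hessian metric derived from the self-concordant barrier to stay on the constrained domain. The sketch below is only the standard Euclidean step, on a toy Gaussian target, with arbitrary integrator settings.

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20, rng=None):
    """One standard HMC step: resample momentum, integrate Hamiltonian dynamics
    with a leapfrog scheme, then Metropolis accept/reject on the energy error."""
    rng = np.random.default_rng(rng)
    p0 = rng.standard_normal(x.size)                 # fresh Gaussian momentum
    x_new, p = x.copy(), p0.copy()
    p = p + 0.5 * step_size * grad_log_prob(x_new)   # initial half step for momentum
    for _ in range(n_leapfrog - 1):
        x_new = x_new + step_size * p
        p = p + step_size * grad_log_prob(x_new)
    x_new = x_new + step_size * p
    p = p + 0.5 * step_size * grad_log_prob(x_new)   # final half step for momentum
    # Accept or reject based on the change in total energy.
    log_accept = (log_prob(x_new) - 0.5 * p @ p) - (log_prob(x) - 0.5 * p0 @ p0)
    return x_new if np.log(rng.uniform()) < log_accept else x

# Toy target: standard Gaussian in 2D.
rng = np.random.default_rng(0)
x, samples = np.zeros(2), []
for _ in range(2_000):
    x = hmc_step(x, log_prob=lambda z: -0.5 * z @ z, grad_log_prob=lambda z: -z, rng=rng)
    samples.append(x)
print(np.mean(samples, axis=0), np.var(samples, axis=0))  # roughly 0 and 1
```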

Watermarking Makes Language Models Radioactive

no code implementations 22 Feb 2024 Tom Sander, Pierre Fernandez, Alain Durmus, Matthijs Douze, Teddy Furon

This paper investigates the radioactivity of LLM-generated texts, i.e. whether it is possible to detect that such input was used as training data.

Differentially Private Representation Learning via Image Captioning

no code implementations 4 Mar 2024 Tom Sander, Yaodong Yu, Maziar Sanjabi, Alain Durmus, Yi Ma, Kamalika Chaudhuri, Chuan Guo

In this work, we show that effective DP representation learning can be done via image captioning and scaling up to internet-scale multimodal datasets.

Image Captioning Representation Learning

Incentivized Learning in Principal-Agent Bandit Games

no code implementations 6 Mar 2024 Antoine Scheid, Daniil Tiapkin, Etienne Boursier, Aymeric Capitaine, El Mahdi El Mhamdi, Eric Moulines, Michael I. Jordan, Alain Durmus

This work considers a repeated principal-agent bandit game, where the principal can only interact with her environment through the agent.

Divide-and-Conquer Posterior Sampling for Denoising Diffusion Priors

no code implementations 18 Mar 2024 Yazid Janati, Alain Durmus, Eric Moulines, Jimmy Olsson

In this work, we take a different approach and utilize the specific structure of the DDM prior to define a set of intermediate and simpler posterior sampling problems, resulting in a lower approximation error compared to previous methods.

Denoising Image Restoration
