Search Results for author: Maxence Noble

Found 5 papers, 4 papers with code

Stochastic Localization via Iterative Posterior Sampling

1 code implementation • 16 Feb 2024 • Louis Grenioux, Maxence Noble, Marylou Gabrié, Alain Oliviero Durmus

Building on score-based learning, stochastic localization techniques have recently attracted renewed interest.

Denoising
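For intuition, here is a minimal sketch of the stochastic localization observation process y_t = t·x + B_t and its denoising posterior mean, worked out for a 1D Gaussian prior. All names are illustrative assumptions; this is not the paper's implementation.

```python
# Sketch of stochastic localization on a 1D Gaussian prior (illustrative,
# not the paper's code): observe dy_t = x dt + dB_t and denoise via the
# posterior mean E[x | y_t], which localizes on the latent sample x.
import numpy as np

rng = np.random.default_rng(0)
mu0, s0 = 0.0, 2.0            # prior N(mu0, s0^2)
x = rng.normal(mu0, s0)       # latent sample the process localizes on

def posterior_mean(y_t, t):
    """E[x | y_t] when y_t | x ~ N(t*x, t) and x ~ N(mu0, s0^2)."""
    precision = 1.0 / s0**2 + t
    return (mu0 / s0**2 + y_t) / precision

y, dt = 0.0, 0.01
for k in range(1, 1001):
    t = k * dt
    y += x * dt + np.sqrt(dt) * rng.normal()   # dy_t = x dt + dB_t
    if k % 250 == 0:
        print(f"t={t:.2f}  E[x|y_t]={posterior_mean(y, t):+.3f}  x={x:+.3f}")
```

As t grows, the posterior mean concentrates on the latent sample x, which is the localization behavior the paper's iterative posterior sampling builds on.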

Tree-Based Diffusion Schrödinger Bridge with Applications to Wasserstein Barycenters

1 code implementation • NeurIPS 2023 • Maxence Noble, Valentin De Bortoli, Arnaud Doucet, Alain Durmus

In this paper, we consider an entropic version of multi-marginal optimal transport (mOT) with a tree-structured quadratic cost, i.e., a function that can be written as a sum of pairwise cost functions between the nodes of a tree.
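A tree-structured quadratic cost is easy to state concretely. The hypothetical snippet below evaluates one on a star tree, the special case under which tree-structured mOT recovers Wasserstein barycenters; the names and layout are illustrative, not the paper's code.

```python
# Minimal sketch of a tree-structured quadratic cost: a multi-marginal
# cost that decomposes into pairwise squared distances along tree edges.
import numpy as np

edges = [(0, 1), (0, 2), (0, 3)]   # star tree: node 0 plays the barycenter role

def tree_cost(xs, edges):
    """Sum of pairwise squared Euclidean costs over the tree edges."""
    return sum(np.sum((xs[i] - xs[j]) ** 2) for i, j in edges)

xs = [np.array([0.0, 0.0]), np.array([1.0, 0.0]),
      np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
print(tree_cost(xs, edges))  # 3.0
```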

Non-asymptotic convergence bounds for Sinkhorn iterates and their gradients: a coupling approach

no code implementations • 13 Apr 2023 • Giacomo Greco, Maxence Noble, Giovanni Conforti, Alain Durmus

Our approach is novel in that it is purely probabilistic and relies on coupling by reflection techniques for controlled diffusions on the torus.
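In their discrete form, the Sinkhorn iterates whose convergence the paper bounds are the classical dual scaling updates for entropic optimal transport. A generic sketch follows; the paper's setting is continuous (controlled diffusions on the torus), so this is only the textbook discrete analogue.

```python
# Generic discrete Sinkhorn iterations for entropic OT (illustrative;
# not the paper's continuous setting or code).
import numpy as np

def sinkhorn(mu, nu, C, eps, n_iter=500):
    """Dual scaling updates; returns the entropic transport plan."""
    K = np.exp(-C / eps)                # Gibbs kernel
    v = np.ones_like(nu)
    for _ in range(n_iter):
        u = mu / (K @ v)                # match the first marginal
        v = nu / (K.T @ u)              # match the second marginal
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(1)
n = 50
mu = nu = np.full(n, 1.0 / n)
x = np.sort(rng.uniform(size=n)); y = np.sort(rng.uniform(size=n))
C = (x[:, None] - y[None, :]) ** 2      # quadratic cost on the line
P = sinkhorn(mu, nu, C, eps=0.05)
print(np.abs(P.sum(axis=1) - mu).max())  # residual marginal error
```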

Unbiased constrained sampling with Self-Concordant Barrier Hamiltonian Monte Carlo

1 code implementation • NeurIPS 2023 • Maxence Noble, Valentin De Bortoli, Alain Durmus

In this paper, we propose Barrier Hamiltonian Monte Carlo (BHMC), a version of the HMC algorithm which aims at sampling from a Gibbs distribution $\pi$ on a manifold $\mathrm{M}$, endowed with a Hessian metric $\mathfrak{g}$ derived from a self-concordant barrier.
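As a toy illustration of the geometry involved (not the authors' implementation), the Hessian of the log-barrier on the open interval (0, 1) defines a metric that diverges at the boundary, which is what confines trajectories in this geometry to the constraint set.

```python
# Illustrative only: the Hessian metric induced by the self-concordant
# log-barrier on (0, 1), the kind of geometry BHMC samples in.
import numpy as np

def barrier(x):
    """Log-barrier phi(x) = -log(x) - log(1 - x) for the interval (0, 1)."""
    return -np.log(x) - np.log(1.0 - x)

def metric(x):
    """Hessian metric g(x) = phi''(x); blows up near 0 and 1."""
    return 1.0 / x**2 + 1.0 / (1.0 - x)**2

for x in (0.5, 0.9, 0.99):
    print(f"x={x:.2f}  g(x)={metric(x):9.2f}")
# The metric diverges at the boundary, so Hamiltonian trajectories in
# this geometry never leave the interior of the constraint set.
```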

Differentially Private Federated Learning on Heterogeneous Data

1 code implementation • 17 Nov 2021 • Maxence Noble, Aurélien Bellet, Aymeric Dieuleveut

Federated Learning (FL) is a paradigm for large-scale distributed learning which faces two key challenges: (i) efficient training from highly heterogeneous user data, and (ii) protecting the privacy of participating users.

Federated Learning
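A standard building block for addressing challenge (ii) is to clip each user's update and add Gaussian noise before averaging. The sketch below shows that generic recipe under assumed toy parameters; it is not a reproduction of the paper's algorithm.

```python
# Generic differentially private FL aggregation step (illustrative):
# L2-clip each user's update, average, then add calibrated Gaussian noise.
import numpy as np

def dp_aggregate(updates, clip_norm, noise_mult, rng):
    """Clip per-user updates to clip_norm, average, add Gaussian noise."""
    clipped = [u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
               for u in updates]
    mean = np.mean(clipped, axis=0)
    sigma = noise_mult * clip_norm / len(updates)  # noise scale per coordinate
    return mean + rng.normal(0.0, sigma, size=mean.shape)

rng = np.random.default_rng(0)
updates = [rng.normal(size=10) for _ in range(8)]   # toy user updates
print(dp_aggregate(updates, clip_norm=1.0, noise_mult=1.0, rng=rng))
```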
