Search Results for author: Roland Badeau

Found 7 papers, 5 papers with code

Unsupervised Music Source Separation Using Differentiable Parametric Source Models

2 code implementations • 24 Jan 2022 • Kilian Schulze-Forster, Gaël Richard, Liam Kelley, Clement S. J. Doire, Roland Badeau

Integrating domain knowledge in the form of source models into a data-driven method leads to high data efficiency: the proposed approach achieves good separation quality even when trained on less than three minutes of audio.

Audio Source Separation · Music Source Separation · +1
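The abstract points at differentiable parametric source models as the carrier of domain knowledge. Below is a minimal NumPy sketch of one such parametric model, a harmonic sinusoidal source; the function name and parameter choices are illustrative, not the paper's API, and the paper implements such models in an autodiff framework so their parameters can be learned end to end.

```python
import numpy as np

def harmonic_source(f0, amplitudes, sr=16000, n_samples=16000):
    """Synthesize one source as a sum of harmonics of the fundamental f0.

    Illustrative only: in the paper's setting a model like this lives in
    an autodiff framework so f0 and the amplitudes can be predicted by a
    network and trained from the mixture alone.
    """
    t = np.arange(n_samples) / sr
    out = np.zeros(n_samples)
    for k, a_k in enumerate(amplitudes, start=1):
        out += a_k * np.sin(2 * np.pi * k * f0 * t)
    return out

# Hypothetical two-source mixture: an unsupervised separation system
# would adjust the source parameters so the re-synthesized mixture
# matches the observed one, without needing isolated-source targets.
mix = harmonic_source(220.0, [1.0, 0.5, 0.25]) + harmonic_source(330.0, [0.8, 0.4])
```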

Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections

2 code implementations • NeurIPS 2021 • Kimia Nadjahi, Alain Durmus, Pierre E. Jacob, Roland Badeau, Umut Şimşekli

The Sliced-Wasserstein distance (SW) is increasingly used in machine learning applications as an alternative to the Wasserstein distance, as it offers significant computational and statistical benefits.
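For context, here is a minimal Monte Carlo sketch of the SW distance the snippet refers to: project both sample sets onto random directions and average 1D Wasserstein distances, which reduce to sorting for equal-size samples. The paper's actual contribution, a deterministic approximation based on concentration of random projections, is not reproduced here.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, rng=None):
    """Monte Carlo estimate of SW_p between two empirical distributions.

    X, Y: (n, d) sample arrays with the same n. Each random direction
    reduces the problem to a 1D Wasserstein distance, which for
    equal-size samples is computed by sorting the projections.
    """
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    thetas = rng.normal(size=(n_projections, d))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)  # uniform on the sphere
    px = np.sort(X @ thetas.T, axis=0)  # sorted 1D projections, one column per direction
    py = np.sort(Y @ thetas.T, axis=0)
    return (np.abs(px - py) ** p).mean() ** (1 / p)

X = np.random.default_rng(0).normal(size=(500, 10))
Y = np.random.default_rng(1).normal(loc=1.0, size=(500, 10))
print(sliced_wasserstein(X, Y))
```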

Approximate Bayesian Computation with the Sliced-Wasserstein Distance

1 code implementation • 28 Oct 2019 • Kimia Nadjahi, Valentin De Bortoli, Alain Durmus, Roland Badeau, Umut Şimşekli

Approximate Bayesian Computation (ABC) is a popular method for approximate inference in generative models with an intractable but easy-to-sample likelihood.

Image Denoising
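A minimal sketch of ABC rejection sampling with the sliced-Wasserstein distance as the data discrepancy, on a hypothetical 2D Gaussian location model; all names, the prior, and the threshold are illustrative, not taken from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def sw2(X, Y, n_proj=50):
    # Sliced-Wasserstein-2 between equal-size empirical samples.
    thetas = rng.normal(size=(n_proj, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    px, py = np.sort(X @ thetas.T, axis=0), np.sort(Y @ thetas.T, axis=0)
    return np.sqrt(((px - py) ** 2).mean())

# Toy generative model: data ~ N(mu, I) in 2D with unknown mean mu.
true_mu = np.array([1.0, -0.5])
observed = rng.normal(loc=true_mu, size=(200, 2))

# ABC rejection: draw mu from the prior, simulate a dataset (easy to
# sample even when the likelihood is intractable), and accept mu when
# the SW distance to the observed data falls below the threshold eps.
eps, posterior = 0.5, []
for _ in range(5000):
    mu = rng.normal(scale=2.0, size=2)             # prior N(0, 4 I)
    simulated = rng.normal(loc=mu, size=(200, 2))
    if sw2(observed, simulated) < eps:
        posterior.append(mu)

print(len(posterior), "accepted samples")
```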

Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance

1 code implementation • NeurIPS 2019 • Kimia Nadjahi, Alain Durmus, Umut Şimşekli, Roland Badeau

Minimum expected distance estimation (MEDE) algorithms are widely used for probabilistic models with intractable likelihood functions, and they have become increasingly popular due to their use in implicit generative modeling (e.g., Wasserstein generative adversarial networks, Wasserstein autoencoders).
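A minimal sketch of MEDE on a toy 1D location model, where the sliced distance coincides with the ordinary Wasserstein distance; grid search stands in for the gradient-based optimization an implicit generative model would use, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def w2_1d(x, y):
    # 1D Wasserstein-2 between equal-size samples: L2 distance of order statistics.
    return np.sqrt(((np.sort(x) - np.sort(y)) ** 2).mean())

# Toy implicit model: g_theta(z) = theta + z with z ~ N(0, 1); the
# likelihood is "intractable" in spirit, but sampling is trivial.
data = rng.normal(loc=2.0, size=500)

def expected_distance(theta, n_rep=20):
    # Monte Carlo estimate of E[ W2(data, model samples) ].
    return np.mean([w2_1d(data, theta + rng.normal(size=500))
                    for _ in range(n_rep)])

# MEDE: pick the theta whose simulated samples are closest in expectation.
grid = np.linspace(0.0, 4.0, 81)
theta_hat = grid[np.argmin([expected_distance(t) for t in grid])]
print(theta_hat)  # should be close to 2.0
```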

Generalized Sliced Wasserstein Distances

1 code implementation • NeurIPS 2019 • Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, Gustavo K. Rohde

The SW distance, specifically, was shown to have similar properties to the Wasserstein distance, while being much simpler to compute, and is therefore used in various applications including generative modeling and general supervised/unsupervised learning.
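Generalized sliced-Wasserstein distances replace the linear projection θ·x with a nonlinear defining function g(x, θ). The sketch below is illustrative only: the cubic "slice" is a simple nonlinear defining function, not the polynomial or neural parameterizations studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gsw(X, Y, project, n_proj=100, p=2):
    """Generalized sliced-Wasserstein: the 1D projection is produced by
    an arbitrary defining function project(X, theta)."""
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        px, py = np.sort(project(X, theta)), np.sort(project(Y, theta))
        total += (np.abs(px - py) ** p).mean()
    return (total / n_proj) ** (1 / p)

linear = lambda X, theta: X @ theta                       # recovers standard SW
odd_poly = lambda X, theta: (X @ theta) ** 3 + X @ theta  # a simple nonlinear slice

X = rng.normal(size=(400, 5))
Y = rng.normal(loc=0.5, size=(400, 5))
print(gsw(X, Y, linear), gsw(X, Y, odd_poly))
```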

Stochastic Gradient Richardson-Romberg Markov Chain Monte Carlo

no code implementations • NeurIPS 2016 • Alain Durmus, Umut Simsekli, Eric Moulines, Roland Badeau, Gaël Richard

We illustrate our framework on the popular Stochastic Gradient Langevin Dynamics (SGLD) algorithm and propose a novel SG-MCMC algorithm referred to as Stochastic Gradient Richardson-Romberg Langevin Dynamics (SGRRLD).

Bayesian Inference
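A minimal sketch of the Richardson-Romberg idea on a toy 1D Gaussian target: run SGLD at step sizes h and h/2 and combine the two posterior averages so the leading O(h) discretization bias cancels. In the paper the two chains are run in parallel with correlated noise to control variance; this simplified version uses independent chains, and all numerical settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgld_chain(h, n_iters, grad_noise=0.5):
    """SGLD on a toy target N(0, 1): grad log p(theta) = -theta.
    Gaussian noise on the gradient stands in for minibatch noise."""
    theta, samples = 0.0, np.empty(n_iters)
    for t in range(n_iters):
        grad = -theta + grad_noise * rng.normal()
        theta += 0.5 * h * grad + np.sqrt(h) * rng.normal()
        samples[t] = theta
    return samples

h, n = 0.5, 200_000
coarse = sgld_chain(h, n)        # step size h
fine = sgld_chain(h / 2, 2 * n)  # step size h/2, run twice as long

# Richardson-Romberg extrapolation of a posterior expectation, here
# E[theta^2] (true value 1): the O(h) bias terms cancel in 2*fine - coarse.
est_coarse, est_fine = (coarse ** 2).mean(), (fine ** 2).mean()
print(est_coarse, est_fine, 2 * est_fine - est_coarse)
```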

Stochastic Quasi-Newton Langevin Monte Carlo

no code implementations • 10 Feb 2016 • Umut Şimşekli, Roland Badeau, A. Taylan Cemgil, Gaël Richard

These second-order methods directly approximate the inverse Hessian by using a limited history of samples and their gradients.

Second-order methods
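The "limited history of samples and their gradients" is the hallmark of L-BFGS-style updates; below is a sketch of the classical two-loop recursion that computes an approximate inverse-Hessian-vector product from (s, y) pairs. The paper embeds such a preconditioner in Langevin Monte Carlo, which requires additional correction terms not shown here.

```python
import numpy as np

def lbfgs_direction(grad, history):
    """L-BFGS two-loop recursion: approximate H^{-1} @ grad using a
    limited history of pairs s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i,
    ordered oldest to newest. Assumes at least one pair."""
    assert history, "needs at least one (s, y) pair"
    q = grad.copy()
    alphas = []
    for s, y in reversed(history):          # most recent pair first
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        q -= alpha * y
        alphas.append((rho, alpha, s, y))
    s, y = history[-1]
    q *= (s @ y) / (y @ y)                  # scaling of the initial H0
    for rho, alpha, s, y in reversed(alphas):  # back through oldest first
        beta = rho * (y @ q)
        q += (alpha - beta) * s
    return q                                # approx H^{-1} @ grad

# Sanity check on a quadratic: with pairs satisfying y = A s, the
# two-loop direction approaches the Newton direction A^{-1} g.
rng = np.random.default_rng(0)
A = np.diag([1.0, 10.0, 100.0])
history = [(s, A @ s) for s in rng.normal(size=(5, 3))]
g = A @ np.array([1.0, 1.0, 1.0])
print(lbfgs_direction(g, history), np.linalg.solve(A, g))
```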
