2 code implementations • 24 Jan 2022 • Kilian Schulze-Forster, Gaël Richard, Liam Kelley, Clement S. J. Doire, Roland Badeau
Integrating domain knowledge in the form of source models into a data-driven method leads to high data efficiency: the proposed approach achieves good separation quality even when trained on less than three minutes of audio.
2 code implementations • NeurIPS 2021 • Kimia Nadjahi, Alain Durmus, Pierre E. Jacob, Roland Badeau, Umut Şimşekli
The Sliced-Wasserstein distance (SW) is increasingly used in machine learning applications as an alternative to the Wasserstein distance, since it offers significant computational and statistical benefits.
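To make the computational benefit concrete, here is a minimal NumPy sketch of the standard Monte Carlo estimator of SW_p: project both sample sets onto random directions and average the closed-form one-dimensional Wasserstein distances between order statistics. The function name and the toy Gaussian data are illustrative, not the paper's implementation.

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, p=2, seed=0):
    """Monte Carlo estimate of the Sliced-Wasserstein distance SW_p
    between two empirical distributions given as (n, d) arrays with equal n."""
    rng = np.random.default_rng(seed)
    # Random projection directions, uniform on the unit sphere in R^d.
    theta = rng.normal(size=(n_projections, X.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sample sets onto every direction at once and sort:
    # in 1D, W_p between equal-size empirical measures compares order statistics.
    X_proj = np.sort(X @ theta.T, axis=0)  # (n, n_projections)
    Y_proj = np.sort(Y @ theta.T, axis=0)
    # Averaging over samples and directions gives the SW_p^p estimate.
    return np.mean(np.abs(X_proj - Y_proj) ** p) ** (1.0 / p)

# Usage: distance between two Gaussian point clouds in R^10.
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, size=(500, 10))
Y = rng.normal(0.5, 1.0, size=(500, 10))
print(sliced_wasserstein(X, Y, n_projections=200))
```

Each projected one-dimensional distance costs only a sort, which is why slicing avoids the expensive optimal-transport solve required by the full Wasserstein distance.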
1 code implementation • 28 Oct 2019 • Kimia Nadjahi, Valentin De Bortoli, Alain Durmus, Roland Badeau, Umut Şimşekli
Approximate Bayesian Computation (ABC) is a popular method for approximate inference in generative models whose likelihood is intractable to evaluate but easy to sample from.
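For readers unfamiliar with ABC, the following is a minimal sketch of the basic rejection scheme the method builds on: the likelihood is never evaluated, only simulated from. The helper names, toy Gaussian model, and tolerance are illustrative choices, not this paper's specific variant.

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed, simulate, prior_sample, distance,
                  n_draws=20000, tolerance=0.05):
    """Rejection ABC: draw parameters from the prior, simulate data,
    and keep draws whose simulations land close to the observation."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()   # parameter drawn from the prior
        x = simulate(theta)      # forward simulation; no likelihood evaluation
        if distance(x, observed) <= tolerance:
            accepted.append(theta)
    return np.array(accepted)    # approximate posterior draws

# Toy model: infer the mean of a unit-variance Gaussian via its sample mean.
observed = rng.normal(1.0, 1.0, size=100).mean()
posterior = abc_rejection(
    observed,
    simulate=lambda m: rng.normal(m, 1.0, size=100).mean(),
    prior_sample=lambda: rng.uniform(-5.0, 5.0),
    distance=lambda a, b: abs(a - b),
)
print(posterior.mean(), posterior.size)
```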
1 code implementation • NeurIPS 2019 • Kimia Nadjahi, Alain Durmus, Umut Şimşekli, Roland Badeau
Minimum expected distance estimation (MEDE) algorithms have been widely used for probabilistic models with intractable likelihood functions, and they have become increasingly popular due to their use in implicit generative modeling (e.g., Wasserstein generative adversarial networks, Wasserstein autoencoders).
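As a toy illustration of the minimum distance idea, the sketch below fits the location and scale of a Gaussian by minimizing the one-dimensional Wasserstein-2 distance between model samples and data; fixing the base noise (a reparameterization) makes the objective a deterministic function of the parameters. All names and the choice of distance are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = np.sort(rng.normal(2.0, 1.5, size=1000))  # observations, sorted once

# Model samples are loc + scale * eps with fixed base noise eps, so the
# distance below stands in for the expected distance in the MEDE objective.
eps = np.sort(rng.normal(size=1000))

def distance_objective(theta):
    """Squared 1D Wasserstein-2 distance between sorted model samples
    and the sorted data (order statistics matched pairwise)."""
    loc, log_scale = theta
    model = loc + np.exp(log_scale) * eps  # exp keeps the scale positive
    return np.mean((model - data) ** 2)

res = minimize(distance_objective, x0=np.zeros(2), method="Nelder-Mead")
print(res.x[0], np.exp(res.x[1]))  # estimated mean and scale, near (2.0, 1.5)
```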
1 code implementation • NeurIPS 2019 • Soheil Kolouri, Kimia Nadjahi, Umut Simsekli, Roland Badeau, Gustavo K. Rohde
The SW distance, specifically, was shown to have similar properties to the Wasserstein distance, while being much simpler to compute, and is therefore used in various applications including generative modeling and general supervised/unsupervised learning.
no code implementations • NeurIPS 2016 • Alain Durmus, Umut Simsekli, Eric Moulines, Roland Badeau, Gaël Richard
We illustrate our framework on the popular Stochastic Gradient Langevin Dynamics (SGLD) algorithm and propose a novel SG-MCMC algorithm referred to as Stochastic Gradient Richardson-Romberg Langevin Dynamics (SGRRLD).
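Below is a rough, uncoupled sketch of the Richardson-Romberg idea applied to SGLD on a toy Gaussian model: run one chain at step size h and one at h/2 (for twice as many iterations), then extrapolate the two ergodic averages so that their O(h) bias terms cancel. The model, step sizes, and independent (rather than correlated) chains are simplifying assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=100)
N = len(data)

def grad_U(theta, batch_size=10):
    """Stochastic gradient of the negative log-posterior for a toy model:
    theta ~ N(0, 1) prior, data ~ N(theta, 1) likelihood."""
    batch = rng.choice(data, size=batch_size)
    return theta - N * np.mean(batch - theta)

def sgld(step, n_iters):
    """Plain SGLD: a stochastic gradient step plus Gaussian noise
    of variance 2 * step."""
    theta, samples = 0.0, np.empty(n_iters)
    for t in range(n_iters):
        theta += -step * grad_U(theta) + np.sqrt(2.0 * step) * rng.normal()
        samples[t] = theta
    return samples

# Two chains: step size h, and step size h / 2 run twice as long.
h = 1e-3
coarse = sgld(h, 20000)
fine = sgld(h / 2, 40000)
# Richardson-Romberg extrapolation of the ergodic averages cancels the
# O(h) bias; here we estimate the posterior second moment of theta.
print(2 * np.mean(fine ** 2) - np.mean(coarse ** 2))
```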
no code implementations • 10 Feb 2016 • Umut Şimşekli, Roland Badeau, A. Taylan Cemgil, Gaël Richard
These second-order methods directly approximate the inverse Hessian using a limited history of samples and their gradients.
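A standard concrete instance of such a limited-memory approximation is the L-BFGS two-loop recursion, sketched below: it applies an inverse-Hessian estimate built from the last few position and gradient differences without ever forming a matrix. The quadratic toy problem and memory size of five are illustrative assumptions, not the paper's sampler.

```python
import numpy as np

def lbfgs_direction(grad, s_hist, y_hist):
    """Two-loop recursion: apply the limited-memory inverse-Hessian
    approximation, built from position differences s_k and gradient
    differences y_k, to the current gradient."""
    q = grad.copy()
    stack = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):
        rho = 1.0 / np.dot(y, s)
        alpha = rho * np.dot(s, q)
        q -= alpha * y
        stack.append((alpha, rho, s, y))
    # Initial scaling H_0 = gamma * I from the most recent curvature pair.
    s, y = s_hist[-1], y_hist[-1]
    q *= np.dot(s, y) / np.dot(y, y)
    for alpha, rho, s, y in reversed(stack):
        beta = rho * np.dot(y, q)
        q += (alpha - beta) * s
    return q  # approximately (inverse Hessian) @ grad

# Toy usage on f(x) = 0.5 * x^T A x, whose gradient is A x.
A = np.diag([1.0, 10.0])
x = np.array([5.0, 5.0])
g = A @ x
s_hist, y_hist = [], []
for _ in range(10):
    d = lbfgs_direction(g, s_hist, y_hist) if s_hist else 0.1 * g
    x_new = x - d
    g_new = A @ x_new
    s_hist = (s_hist + [x_new - x])[-5:]  # keep only a limited history
    y_hist = (y_hist + [g_new - g])[-5:]
    x, g = x_new, g_new
print(x)  # approaches the minimizer at the origin
```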