no code implementations • 19 Sep 2023 • Vasilis Gkolemis, Michael Gutmann, Henri Pesonen
Our implementation can be used in two ways.
no code implementations • 29 Jul 2022 • Benjamin Rhodes, Michael Gutmann
The recent introduction of gradient-based MCMC for discrete spaces holds great promise, and comes with the tantalising possibility of new discrete counterparts to celebrated continuous methods such as MALA and HMC.
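For reference, a minimal sketch of the continuous MALA update mentioned above, on an assumed toy Gaussian target; the discrete-space counterparts discussed in the paper are not shown here.

```python
# Minimal MALA sketch (toy continuous target, for orientation only).
import numpy as np
rng = np.random.default_rng(2)

def log_p(x):            # standard Gaussian target
    return -0.5 * x @ x

def grad_log_p(x):
    return -x

def mala(x0, step=0.1, n_steps=5000):
    x, samples = x0, []
    for _ in range(n_steps):
        # Langevin proposal: gradient step plus Gaussian noise.
        prop = x + step * grad_log_p(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
        # Metropolis-Hastings correction for the asymmetric proposal.
        log_q_fwd = -np.sum((prop - x - step * grad_log_p(x)) ** 2) / (4 * step)
        log_q_rev = -np.sum((x - prop - step * grad_log_p(prop)) ** 2) / (4 * step)
        log_alpha = log_p(prop) - log_p(x) + log_q_rev - log_q_fwd
        if np.log(rng.uniform()) < log_alpha:
            x = prop
        samples.append(x.copy())
    return np.array(samples)

chain = mala(np.zeros(2))
```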
no code implementations • 8 Nov 2020 • Vasileios Gkolemis, Michael Gutmann
Approximate Bayesian Computation (ABC) methods, also known as likelihood-free inference techniques, are a class of methods for performing inference when the likelihood is intractable.
1 code implementation • 20 Oct 2020 • Yanzhi Chen, Dinghuai Zhang, Michael Gutmann, Aaron Courville, Zhanxing Zhu
We consider the fundamental problem of how to automatically construct summary statistics for implicit generative models where the evaluation of the likelihood function is intractable, but sampling data from the model is possible.
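A minimal sketch of the setting, under assumed toy choices (a Gaussian simulator, an InfoNCE-style contrastive bound, and hypothetical network sizes); it illustrates learning a summary network by maximising a mutual-information lower bound between parameters and summaries, not the authors' exact training procedure.

```python
# Sketch: learn summary statistics s(x) for an implicit model by contrastive
# mutual-information maximisation between parameters theta and summaries s(x).
import torch
import torch.nn as nn

def simulator(theta, n_obs=50):
    # Toy implicit model: we only need to sample from it, not evaluate its likelihood.
    return theta.unsqueeze(-1) + torch.randn(theta.shape[0], n_obs)

B = 128                                           # batch size (prior draws per step)
summary_net = nn.Sequential(nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, 2))
critic = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(summary_net.parameters()) + list(critic.parameters()), lr=1e-3)

for step in range(2000):
    theta = torch.randn(B)                        # draws from the prior
    x = simulator(theta)                          # simulated data sets
    s = summary_net(x)                            # candidate summary statistics
    # InfoNCE: score every (theta_i, s_j) pair; the diagonal pairs are the positives.
    pairs = torch.cat([theta.view(-1, 1).repeat_interleave(B, 0), s.repeat(B, 1)], dim=1)
    scores = critic(pairs).view(B, B)
    loss = nn.functional.cross_entropy(scores, torch.arange(B))
    opt.zero_grad(); loss.backward(); opt.step()
```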
1 code implementation • 3 May 2019 • Marko Järvenpää, Michael Gutmann, Aki Vehtari, Pekka Marttinen
We consider Bayesian inference when only a limited number of noisy log-likelihood evaluations can be obtained.
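A minimal sketch of the setting, assuming a toy one-dimensional problem and a fixed evaluation grid; it fits a Gaussian process surrogate to a few noisy log-likelihood evaluations and reads off a surrogate posterior, without the acquisition rules developed in the paper.

```python
# Sketch: GP surrogate over noisy log-likelihood evaluations (toy example).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
rng = np.random.default_rng(3)

def noisy_loglik(theta, sigma_noise=0.5):
    # Stand-in for an expensive, noisy log-likelihood evaluation.
    return -0.5 * (theta - 1.0) ** 2 + rng.normal(0.0, sigma_noise)

theta_eval = np.linspace(-3, 3, 15)                      # limited evaluation budget
ll_eval = np.array([noisy_loglik(t) for t in theta_eval])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.25)  # alpha = noise variance
gp.fit(theta_eval.reshape(-1, 1), ll_eval)

grid = np.linspace(-3, 3, 200)
mean_ll = gp.predict(grid.reshape(-1, 1))
post = np.exp(mean_ll - mean_ll.max())                   # surrogate posterior under a flat prior
post /= np.trapz(post, grid)
```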
1 code implementation • 23 Oct 2018 • Steven Kleinegesse, Michael Gutmann
Bayesian experimental design involves the optimal allocation of resources in an experiment, with the aim of optimising cost and performance.
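As a point of reference, a minimal sketch of choosing a design by maximising the expected information gain with a nested Monte Carlo estimator; it assumes a toy model with a tractable likelihood, whereas the paper targets implicit models where such likelihood evaluations are unavailable.

```python
# Sketch: nested Monte Carlo estimate of expected information gain over designs d.
import numpy as np
rng = np.random.default_rng(0)

def log_lik(y, theta, d, sigma=1.0):
    # Toy measurement model: y ~ Normal(theta * d, sigma^2).
    return -0.5 * ((y - theta * d) / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma ** 2)

def eig(d, n_outer=500, n_inner=500):
    theta = rng.normal(size=n_outer)                     # prior draws
    y = theta * d + rng.normal(size=n_outer)             # simulated outcomes under design d
    theta_inner = rng.normal(size=n_inner)               # fresh draws for the marginal likelihood
    log_marg = np.array([np.log(np.mean(np.exp(log_lik(yi, theta_inner, d)))) for yi in y])
    return np.mean(log_lik(y, theta, d) - log_marg)

designs = np.linspace(0.1, 5.0, 20)
best = designs[np.argmax([eig(d) for d in designs])]
print("design with highest estimated information gain:", best)
```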
1 code implementation • 18 Oct 2018 • Benjamin Rhodes, Michael Gutmann
The core idea is to use a variational lower bound to the noise-contrastive estimation (NCE) objective function, which can be optimised in the same fashion as the evidence lower bound (ELBO) in standard variational inference (VI).
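For context, a minimal sketch of plain NCE on an assumed toy unnormalised Gaussian model; the variational lower bound developed in the paper is not shown.

```python
# Sketch: NCE estimates an unnormalised model (including its normalising constant)
# by logistic discrimination between data and noise samples.
import torch

data = torch.randn(1000) * 0.5 + 2.0                          # observations
noise = torch.randn(1000) * 2.0                               # samples from the noise distribution
params = torch.tensor([0.0, 0.0, 0.0], requires_grad=True)    # [mean, log_precision, log_norm_const]

def log_unnorm_model(x, p):
    # Unnormalised log-density plus a learned log normalising constant c.
    mean, log_prec, c = p
    return -0.5 * torch.exp(log_prec) * (x - mean) ** 2 + c

def log_noise(x):
    return torch.distributions.Normal(0.0, 2.0).log_prob(x)

opt = torch.optim.Adam([params], lr=0.05)
for step in range(500):
    # Classify data (label 1) vs noise (label 0) using the log-ratio as the logit.
    logit_data = log_unnorm_model(data, params) - log_noise(data)
    logit_noise = log_unnorm_model(noise, params) - log_noise(noise)
    loss = -(torch.nn.functional.logsigmoid(logit_data).mean()
             + torch.nn.functional.logsigmoid(-logit_noise).mean())
    opt.zero_grad(); loss.backward(); opt.step()
```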
no code implementations • 20 Oct 2016 • Marko Järvenpää, Michael Gutmann, Aki Vehtari, Pekka Marttinen
Approximate Bayesian computation (ABC) can be used for model fitting when the likelihood function is intractable but simulating from the model is feasible.
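A minimal rejection-ABC sketch on an assumed toy model, to make the setting concrete; it is not the Gaussian-process-based approach developed in the paper.

```python
# Sketch: rejection ABC — accept parameter draws whose simulated summaries
# fall close to the observed summary.
import numpy as np
rng = np.random.default_rng(1)

def simulate(theta, n=100):
    # Implicit model: we can sample from it but do not evaluate its likelihood.
    return rng.normal(theta, 1.0, size=n)

observed = simulate(1.5)
obs_summary = observed.mean()

theta_prior = rng.uniform(-5, 5, size=20000)                   # prior draws
summaries = np.array([simulate(t).mean() for t in theta_prior])
accepted = theta_prior[np.abs(summaries - obs_summary) < 0.1]  # tolerance eps = 0.1
print("approximate posterior mean:", accepted.mean())
```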
no code implementations • 14 Feb 2012 • Michael Gutmann, Jun-Ichiro Hirayama
We show that the Bregman divergence provides a rich framework to estimate unnormalized statistical models for continuous or discrete random variables, that is, models which do not integrate or sum to one, respectively.
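For reference, one common separable form of the Bregman divergence between two (possibly unnormalised) densities p and q, generated by a strictly convex function Psi; this is the standard textbook definition, not necessarily the exact parameterisation used in the paper.

```latex
d_\Psi(p, q) = \int \Big[ \Psi\big(p(x)\big) - \Psi\big(q(x)\big)
               - \Psi'\big(q(x)\big)\,\big(p(x) - q(x)\big) \Big] \, dx
```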