no code implementations • 1 Feb 2024 • Theodore Papamarkou, Maria Skoularidou, Konstantina Palla, Laurence Aitchison, Julyan Arbel, David Dunson, Maurizio Filippone, Vincent Fortuin, Philipp Hennig, José Miguel Hernández-Lobato, Aliaksandr Hubin, Alexander Immer, Theofanis Karaletsos, Mohammad Emtiyaz Khan, Agustinus Kristiadi, Yingzhen Li, Stephan Mandt, Christopher Nemeth, Michael A. Osborne, Tim G. J. Rudner, David Rügamer, Yee Whye Teh, Max Welling, Andrew Gordon Wilson, Ruqi Zhang
In the current landscape of deep learning research, there is a predominant emphasis on achieving high predictive accuracy in supervised tasks involving large image and language datasets.
no code implementations • 24 May 2023 • Louis Sharrock, Daniel Dodd, Christopher Nemeth
Our methods are based on the perspective of marginal maximum likelihood estimation as an optimization problem: namely, as the minimization of a free energy functional.
1 code implementation • 24 May 2023 • Louis Sharrock, Lester Mackey, Christopher Nemeth
We introduce a suite of new particle-based algorithms for sampling in constrained domains which are entirely learning rate free.
1 code implementation • 26 Jan 2023 • Louis Sharrock, Christopher Nemeth
In recent years, particle-based variational inference (ParVI) methods such as Stein variational gradient descent (SVGD) have grown in popularity as scalable methods for Bayesian inference.
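To make the SVGD idea concrete, here is a minimal sketch of one update step: each particle moves along a kernelised direction that combines an attraction toward high posterior density with a repulsion that keeps particles spread out. The RBF kernel, fixed bandwidth, standard-normal target, and step size are illustrative choices, not taken from the paper.

```python
import numpy as np

def svgd_step(x, grad_logp, h=1.0, eps=0.1):
    """One SVGD update on 1-D particles x using an RBF kernel of bandwidth h."""
    diffs = x[:, None] - x[None, :]                 # diffs[i, j] = x_i - x_j
    k = np.exp(-diffs**2 / (2 * h**2))              # kernel matrix k(x_j, x_i)
    # Kernelised Stein direction: driving term k * grad log p plus the
    # repulsive term grad_{x_j} k, averaged over all particles.
    phi = (k @ grad_logp(x) + (diffs / h**2 * k).sum(axis=1)) / len(x)
    return x + eps * phi

# Toy target: standard normal, so grad log p(x) = -x.
x = np.linspace(-3.0, 5.0, 50)                      # deterministic initialisation
for _ in range(500):
    x = svgd_step(x, lambda t: -t)
print(x.mean(), x.std())                            # particles roughly approximate N(0, 1)
```

The update is deterministic given the initial particles, which is one reason ParVI methods are attractive compared with stochastic MCMC chains.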
1 code implementation • 28 Oct 2022 • Srshti Putcha, Christopher Nemeth, Paul Fearnhead
Stochastic gradient MCMC (SGMCMC) offers a scalable alternative to traditional MCMC, by constructing an unbiased estimate of the gradient of the log-posterior with a small, uniformly weighted subsample of the data.
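The subsampled gradient estimate described above can be sketched as follows, using a toy Gaussian-mean model (the model, data, and subsample size are illustrative assumptions): the likelihood term from a uniform subsample is rescaled by N/n, so its expectation matches the full-data gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 1000, 50
data = rng.normal(2.0, 1.0, N)                       # toy data: x_i ~ N(2, 1)

def full_grad(theta):
    # d/dtheta [log N(theta | 0, 1) + sum_i log N(x_i | theta, 1)]
    return -theta + np.sum(data - theta)

def stochastic_grad(theta):
    # Unbiased estimate from a uniformly drawn subsample of size n,
    # with the likelihood contribution rescaled by N / n.
    idx = rng.choice(N, size=n, replace=False)
    return -theta + (N / n) * np.sum(data[idx] - theta)

theta = 0.5
estimates = np.array([stochastic_grad(theta) for _ in range(5000)])
print(full_grad(theta), estimates.mean())            # close: the estimator is unbiased
```

Averaging many subsample estimates recovers the full-data gradient, which is the property SGMCMC relies on.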
no code implementations • 8 Aug 2022 • Callum Vyner, Christopher Nemeth, Chris Sherlock
Divide-and-conquer strategies for Monte Carlo algorithms are an increasingly popular approach to making Bayesian inference scalable to large data sets.
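As a minimal illustration of the divide-and-conquer idea, the sketch below uses consensus Monte Carlo (Scott et al.) as the combination rule: each shard is sampled independently and the draws are merged by a precision-weighted average. This is a well-known generic scheme, not necessarily the combination step proposed in the paper; the conjugate Gaussian model makes the subposteriors available in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
N, S = 10_000, 4                                     # data size, number of shards
data = rng.normal(3.0, 1.0, N)
shards = np.array_split(data, S)

# Each worker samples its subposterior; for a Gaussian mean with known unit
# variance and a flat prior this is N(shard mean, 1 / shard size).
T = 5000
draws = np.stack([
    rng.normal(s.mean(), 1 / np.sqrt(len(s)), T) for s in shards
])
weights = np.array([len(s) for s in shards])[:, None]   # precision weights

# Consensus combine: precision-weighted average, draw by draw.
combined = (weights * draws).sum(axis=0) / weights.sum()
print(combined.mean(), combined.std())               # ≈ full posterior N(data.mean(), 1/N)
```

In this Gaussian setting the weighted average is exact; for non-Gaussian subposteriors the combination step is where the various divide-and-conquer methods differ.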
no code implementations • 3 Jun 2021 • Thomas Pinder, Kathryn Turnbull, Christopher Nemeth, David Leslie
We derive a Matérn Gaussian process (GP) on the vertices of a hypergraph.
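For intuition, the standard Matérn construction on an ordinary graph (Borovitskiy et al.) defines the covariance through the graph Laplacian, K = (2ν/κ² I + L)^(−ν); the hypergraph case replaces L with a hypergraph Laplacian. The ring graph and hyperparameter values below are illustrative assumptions.

```python
import numpy as np

# Adjacency of a small ring graph (a stand-in: the paper works on a
# hypergraph, but the Laplacian-based construction is analogous).
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A                       # graph Laplacian

# Graph Matérn kernel K = (2*nu/kappa**2 * I + L)^(-nu), computed
# through the Laplacian's eigendecomposition.
nu, kappa = 1.5, 1.0
evals, evecs = np.linalg.eigh(L)
K = evecs @ np.diag((2 * nu / kappa**2 + evals) ** -nu) @ evecs.T
print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() > 0)   # valid covariance
```

Since the Laplacian is positive semi-definite, every eigenvalue of K is strictly positive, so K is a valid GP covariance over the vertices.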
no code implementations • 27 May 2021 • Jeremie Coullon, Leah South, Christopher Nemeth
Stochastic gradient Markov chain Monte Carlo (SGMCMC) is a popular class of algorithms for scalable Bayesian inference.
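The simplest member of this class is stochastic gradient Langevin dynamics (SGLD): half a stochastic gradient step plus injected Gaussian noise of matching scale, with no accept/reject step. The sketch below runs SGLD on a toy Gaussian-mean posterior; the step size, subsample size, and burn-in length are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
N, n, eps = 500, 50, 1e-3
data = rng.normal(1.0, 1.0, N)                       # target posterior ≈ N(data.mean(), 1/N)

def grad_est(theta):
    # Subsampled gradient with a flat prior and unit-variance likelihood.
    idx = rng.choice(N, size=n, replace=False)
    return (N / n) * np.sum(data[idx] - theta)

theta, samples = 0.0, []
for t in range(20_000):
    # SGLD update: half a gradient step plus N(0, eps) noise.
    theta += 0.5 * eps * grad_est(theta) + np.sqrt(eps) * rng.normal()
    if t > 2_000:                                    # discard burn-in
        samples.append(theta)
print(np.mean(samples), data.mean())                 # posterior mean recovered
```

With a fixed step size the chain targets the posterior only approximately (the gradient noise slightly inflates the variance), which is one of the trade-offs SGMCMC methods manage.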
1 code implementation • 25 Sep 2020 • Thomas Pinder, Christopher Nemeth, David Leslie
We show how to use Stein variational gradient descent (SVGD) to carry out inference in Gaussian process (GP) models with non-Gaussian likelihoods and large data volumes.
1 code implementation • 16 Jul 2019 • Christopher Nemeth, Paul Fearnhead
In this paper, we focus on a particular class of scalable Monte Carlo algorithms, stochastic gradient Markov chain Monte Carlo (SGMCMC), which utilises data subsampling techniques to reduce the per-iteration cost of MCMC.
2 code implementations • 29 Jan 2019 • Christopher Aicher, Srshti Putcha, Christopher Nemeth, Paul Fearnhead, Emily B. Fox
We evaluate our proposed particle-buffered stochastic gradient MCMC for inference on both long synthetic sequences and minute-resolution financial returns data, demonstrating the importance of this class of methods.
3 code implementations • 21 Dec 2018 • Jamie Fairbrother, Christopher Nemeth, Maxime Rischard, Johanni Brea, Thomas Pinder
Gaussian processes are a class of flexible nonparametric Bayesian tools that are widely used across the sciences, and in industry, to model complex data sources.
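The paper's package is written in Julia, but the underlying GP regression equations are the same everywhere; here is a minimal numpy sketch of the posterior predictive mean and covariance under a squared-exponential kernel. The test function, noise level, and hyperparameters are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between 1-D input vectors a and b."""
    return variance * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / lengthscale**2)

# Noisy observations of a smooth function.
rng = np.random.default_rng(3)
x = np.linspace(0, 5, 20)
y = np.sin(x) + 0.05 * rng.normal(size=20)
noise = 0.05**2

# Standard GP regression: condition the prior on the observations.
x_star = np.linspace(0, 5, 100)
K = rbf(x, x) + noise * np.eye(20)                   # train covariance + noise
K_s = rbf(x, x_star)                                 # train/test cross-covariance
alpha = np.linalg.solve(K, y)
mean = K_s.T @ alpha                                 # posterior predictive mean
cov = rbf(x_star, x_star) - K_s.T @ np.linalg.solve(K, K_s)
print(np.max(np.abs(mean - np.sin(x_star))))         # small: the GP recovers sin(x)
```

A production implementation would use a Cholesky factorisation rather than `solve`, and would fit the hyperparameters by maximising the marginal likelihood.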
1 code implementation • NeurIPS 2018 • Jack Baker, Paul Fearnhead, Emily B. Fox, Christopher Nemeth
Unfortunately, many popular large-scale Bayesian models, such as network or topic models, require inference on sparse simplex spaces.
1 code implementation • 2 Oct 2017 • Jack Baker, Paul Fearnhead, Emily B. Fox, Christopher Nemeth
To do this, the package uses the software library TensorFlow, which has a variety of statistical distributions and mathematical operations as standard, meaning a wide class of models can be built using this framework.
1 code implementation • NeurIPS 2019 • Christopher Nemeth, Fredrik Lindsten, Maurizio Filippone, James Hensman
In this paper, we introduce the pseudo-extended MCMC method as a simple approach for improving the mixing of the MCMC sampler for multi-modal posterior distributions.
1 code implementation • 16 Jun 2017 • Jack Baker, Paul Fearnhead, Emily B. Fox, Christopher Nemeth
These methods use a noisy estimate of the gradient of the log posterior, which reduces the per-iteration computational cost of the algorithm.
no code implementations • 27 May 2016 • Christopher Nemeth, Chris Sherlock
This approximation is exploited through three methodologies: firstly, a Hamiltonian Monte Carlo algorithm targeting the expectation of the posterior density provides a sample from an approximation to the posterior; secondly, evaluating the true posterior at the sampled points leads to an importance sampler that, asymptotically, targets the true posterior expectations; finally, an alternative importance sampler uses the full Gaussian-process distribution of the approximation to the log-posterior density to re-weight any initial sample and provide both an estimate of the posterior expectation and a measure of the uncertainty in it.
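The second methodology, importance reweighting of samples drawn from an approximation, can be sketched generically: draw from a cheap surrogate, then weight each draw by the ratio of the true posterior density to the surrogate density. The Gaussian target and shifted-Gaussian surrogate below are illustrative stand-ins, not the paper's GP surrogate.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_target(x):                                   # "true" log-posterior: N(0, 1)
    return -0.5 * x**2

def log_surrogate(x):                                # cheap approximation: N(0.5, 1)
    return -0.5 * (x - 0.5)**2

# Sample from the surrogate, then reweight by the density ratio so the
# weighted sample targets the true posterior (self-normalised IS).
x = rng.normal(0.5, 1.0, 20_000)
log_w = log_target(x) - log_surrogate(x)
w = np.exp(log_w - log_w.max())                      # stabilise before normalising
w /= w.sum()
print(np.sum(w * x))                                 # ≈ true posterior mean, 0
```

Self-normalisation means only unnormalised densities are needed, which is what makes the scheme usable when the posterior's normalising constant is unknown.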
no code implementations • 23 Dec 2014 • Christopher Nemeth, Chris Sherlock, Paul Fearnhead
This paper proposes a new sampling scheme based on Langevin dynamics that is applicable within pseudo-marginal and particle Markov chain Monte Carlo algorithms.
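A plain (non-pseudo-marginal) Langevin-dynamics sampler, MALA, illustrates the ingredients: a gradient-informed Gaussian proposal plus a Metropolis-Hastings correction that accounts for the proposal's asymmetry. The standard-normal target and step size below are illustrative; the paper's contribution concerns the harder setting where the posterior can only be evaluated unbiasedly.

```python
import numpy as np

rng = np.random.default_rng(5)
eps = 0.5

def log_p(x):                                        # toy target: standard normal
    return -0.5 * x**2

def grad_log_p(x):
    return -x

def log_q(x_to, x_from):                             # Langevin proposal log-density
    m = x_from + 0.5 * eps * grad_log_p(x_from)
    return -0.5 * (x_to - m)**2 / eps

x, samples = 3.0, []
for _ in range(20_000):
    prop = x + 0.5 * eps * grad_log_p(x) + np.sqrt(eps) * rng.normal()
    log_alpha = log_p(prop) - log_p(x) + log_q(x, prop) - log_q(prop, x)
    if np.log(rng.uniform()) < log_alpha:            # Metropolis-Hastings correction
        x = prop
    samples.append(x)
print(np.mean(samples), np.std(samples))             # ≈ 0 and 1
```

In the pseudo-marginal setting, `log_p` is replaced by the log of an unbiased estimate, and the challenge the paper addresses is obtaining usable gradient information in that case.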
no code implementations • 4 Jun 2013 • Christopher Nemeth, Paul Fearnhead, Lyudmila Mihaylova
This paper introduces an alternative approach for estimating these terms at a computational cost that is linear in the number of particles.