no code implementations • 1 Jan 2021 • Andrew Campbell, Wenlong Chen, Vincent Stimper, José Miguel Hernández-Lobato, Yichuan Zhang
Existing approaches for automating this task either optimise a proxy for mixing speed or consider the HMC chain as an implicit variational distribution and optimise a tractable lower bound that is too loose to be useful in practice.
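For context, a minimal NumPy sketch of the HMC transition that such schemes tune; the step size and number of leapfrog steps are exactly the hyperparameters an automated strategy must set. This is a generic textbook sampler, not the paper's method, and all names are illustrative:

```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=10, rng=None):
    """One HMC transition targeting the density exp(log_prob)."""
    if rng is None:
        rng = np.random.default_rng()
    p = rng.standard_normal(x.shape)               # resample momentum
    x_new, p_new = x.copy(), p.copy()

    # Leapfrog integration of the Hamiltonian dynamics.
    p_new += 0.5 * step_size * grad_log_prob(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_log_prob(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)

    # Metropolis correction for the discretisation error.
    log_accept = (log_prob(x_new) - 0.5 * p_new @ p_new) \
               - (log_prob(x) - 0.5 * p @ p)
    return x_new if np.log(rng.uniform()) < log_accept else x

# Example: sampling a standard 2-D Gaussian.
log_prob = lambda x: -0.5 * x @ x
grad_log_prob = lambda x: -x
x = np.zeros(2)
for _ in range(1000):
    x = hmc_step(x, log_prob, grad_log_prob)
```

Poorly chosen `step_size` or `n_leapfrog` leads to rejected proposals or slow exploration, which is what makes automating their choice worthwhile.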
no code implementations • 17 Nov 2018 • Yichuan Zhang
Approximate inference is one of the fundamental research areas in machine learning.
no code implementations • 27 Sep 2018 • Yichuan Zhang, José Miguel Hernández-Lobato, Zoubin Ghahramani
Training probabilistic models with neural network components is intractable in most cases and requires approximations such as Markov chain Monte Carlo (MCMC), which is not scalable and requires significant hyper-parameter tuning, or mean-field variational inference (VI), which is biased.
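The bias of mean-field VI mentioned here has a classic closed-form illustration: for a Gaussian target, the fully factorised Gaussian minimising KL(q||p) matches the diagonal of the target's precision matrix, and therefore underestimates the marginal variances whenever variables are correlated. A small numerical check (illustrative, not from the paper):

```python
import numpy as np

Sigma = np.array([[1.0, 0.9],
                  [0.9, 1.0]])            # correlated Gaussian target covariance
Lambda = np.linalg.inv(Sigma)             # target precision

# Optimal mean-field Gaussian under KL(q||p): per-coordinate precision Lambda_ii.
print("true marginal variances:  ", np.diag(Sigma))        # [1.0, 1.0]
print("mean-field variances:     ", 1.0 / np.diag(Lambda)) # [0.19, 0.19]
```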
no code implementations • 25 May 2018 • Yichuan Zhang, José Miguel Hernández-Lobato
In this work, we aim to improve upon MCMC and VI with a novel hybrid method based on the idea of reducing the simulation bias of finite-length MCMC chains using gradient-based optimisation.
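As a loose illustration of this hybrid pattern only (not the paper's algorithm): draw from a simple parameterised initial distribution, refine with a few reparameterisable, Metropolis-free leapfrog steps so the short chain is differentiable, and adjust the initial parameter by gradient descent. The `divergence_proxy` objective below is a hypothetical moment-matching stand-in:

```python
import numpy as np

rng = np.random.default_rng(0)
grad_log_prob = lambda x: -x                  # target: standard 1-D Gaussian
eps_x = rng.standard_normal(2000)             # fixed noise -> reparameterised,
eps_p = rng.standard_normal(2000)             # so the chain is a deterministic
                                              # function of the parameter sigma

def refine(sigma, k=5, step=0.2):
    """Draw x ~ N(0, sigma^2), then refine with k Metropolis-free leapfrog steps."""
    x, p = sigma * eps_x, eps_p.copy()
    for _ in range(k):
        p = p + 0.5 * step * grad_log_prob(x)
        x = x + step * p
        p = p + 0.5 * step * grad_log_prob(x)
    return x

def divergence_proxy(x):
    """Hypothetical objective: moment mismatch against the N(0, 1) target."""
    return x.mean() ** 2 + (x.var() - 1.0) ** 2

sigma, lr, h = 0.1, 0.5, 1e-3
for _ in range(200):                          # finite-difference gradient descent
    g = (divergence_proxy(refine(sigma + h)) -
         divergence_proxy(refine(sigma - h))) / (2 * h)
    sigma -= lr * g
print("learned initial scale:", round(sigma, 3))   # approaches ~1.0
```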
no code implementations • NeurIPS 2014 • Yichuan Zhang, Charles Sutton
Sampling from hierarchical Bayesian models is often difficult for MCMC methods, because of the strong correlations between the model parameters and the hyperparameters.
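The kind of coupling the abstract describes is often illustrated with Neal's funnel, where local variables have a scale controlled by a hyperparameter. The sketch below shows the standard non-centred reparameterisation that decouples the two (a common remedy used for illustration here, not the method proposed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Centred parameterisation: the scale of x depends sharply on v,
# producing the funnel geometry that frustrates standard samplers.
v = rng.normal(0.0, 3.0, size=5000)                # log-variance hyperparameter
x_centred = rng.normal(0.0, np.exp(0.5 * v))

# Non-centred parameterisation: sample standardised noise z independently
# of v, then rescale deterministically; the (v, z) coordinates are decoupled.
z = rng.standard_normal(5000)
x_noncentred = np.exp(0.5 * v) * z                 # same marginal distribution

print(np.corrcoef(np.abs(x_centred), v)[0, 1])     # strong scale coupling
print(np.corrcoef(np.abs(z), v)[0, 1])             # ~0: decoupled
```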
no code implementations • NeurIPS 2012 • Yichuan Zhang, Zoubin Ghahramani, Amos J. Storkey, Charles A. Sutton
Continuous relaxations play an important role in discrete optimization, but have not seen much use in approximate probabilistic inference.
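One classical example in this family is the Gaussian integral trick: a binary Markov random field p(s) ∝ exp(½ sᵀWs + bᵀs), s ∈ {0,1}ⁿ, is augmented with a Gaussian variable x chosen so that s can be summed out in closed form, leaving a smooth density over x that gradient-based samplers can explore. A minimal sketch, with a tiny illustrative model (not necessarily the construction used in the paper):

```python
import numpy as np

n = 3
W = np.array([[0.0, 1.0, -1.0],
              [1.0, 0.0,  0.5],
              [-1.0, 0.5, 0.0]])                  # pairwise couplings
b = np.array([0.2, -0.1, 0.3])                    # biases
d = np.full(n, 2.5)                               # diagonal shift making A = W + D
A = W + np.diag(d)                                # positive definite
A_inv = np.linalg.inv(A)

def log_p_x(x):
    """Smooth marginal over the auxiliary Gaussian x, with s summed out exactly."""
    logits = x + b - 0.5 * d
    return -0.5 * x @ A_inv @ x + np.logaddexp(0.0, logits).sum()

def sample_s_given_x(x, rng):
    """Given x, the binary units are conditionally independent Bernoullis."""
    logits = x + b - 0.5 * d
    return (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

# log_p_x is differentiable, so x can be sampled with HMC or Langevin dynamics,
# and discrete configurations recovered afterwards via sample_s_given_x.
```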
no code implementations • NeurIPS 2011 • Yichuan Zhang, Charles A. Sutton
The performance of Markov chain Monte Carlo methods is often sensitive to the scaling and correlations between the random variables of interest.
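This sensitivity is easy to demonstrate with random-walk Metropolis on a strongly correlated Gaussian: an isotropic proposal mixes poorly, while preconditioning the proposal with the target's covariance (assumed known here; adaptive schemes estimate it) restores healthy acceptance rates. An illustrative sketch, not the method proposed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[1.0, 0.98], [0.98, 1.0]])        # strongly correlated target
prec = np.linalg.inv(cov)
log_p = lambda x: -0.5 * x @ prec @ x

def metropolis(proposal_chol, n=5000, scale=0.5):
    """Random-walk Metropolis with proposal covariance scale^2 * L L^T."""
    x, accepted = np.zeros(2), 0
    for _ in range(n):
        prop = x + scale * proposal_chol @ rng.standard_normal(2)
        if np.log(rng.uniform()) < log_p(prop) - log_p(x):
            x, accepted = prop, accepted + 1
    return accepted / n

acc_iso = metropolis(np.eye(2))                    # ignores correlations
acc_pre = metropolis(np.linalg.cholesky(cov))      # matches target geometry
print(f"isotropic acceptance: {acc_iso:.2f}, preconditioned: {acc_pre:.2f}")
```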