Search Results for author: Yichuan Zhang

Found 7 papers, 0 papers with code

Gradient-based tuning of Hamiltonian Monte Carlo hyperparameters

no code implementations1 Jan 2021 Andrew Campbell, Wenlong Chen, Vincent Stimper, José Miguel Hernández-Lobato, Yichuan Zhang

Existing approaches for automating this task either optimise a proxy for mixing speed or consider the HMC chain as an implicit variational distribution and optimise a tractable lower bound that is too loose to be useful in practice.
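To see why the step-size hyperparameter matters enough to tune, here is a toy sketch (not the paper's gradient-based method): a minimal 1-D Hamiltonian Monte Carlo sampler on a standard Gaussian, whose Metropolis acceptance rate collapses once the leapfrog step size crosses the integrator's stability limit.

```python
import numpy as np

def leapfrog(q, p, grad_logp, step_size, n_steps):
    # Standard leapfrog integrator: half momentum step, alternating
    # full position/momentum steps, closing half momentum step.
    p = p + 0.5 * step_size * grad_logp(q)
    for _ in range(n_steps - 1):
        q = q + step_size * p
        p = p + step_size * grad_logp(q)
    q = q + step_size * p
    p = p + 0.5 * step_size * grad_logp(q)
    return q, p

def hmc_acceptance_rate(step_size, n_steps=10, n_iters=2000, seed=0):
    # Average Metropolis acceptance of HMC on a 1-D standard Gaussian
    # target, as a function of the step-size hyperparameter.
    rng = np.random.default_rng(seed)
    logp = lambda q: -0.5 * q ** 2
    grad_logp = lambda q: -q
    q, accepts = 0.0, 0
    for _ in range(n_iters):
        p = rng.standard_normal()
        q_new, p_new = leapfrog(q, p, grad_logp, step_size, n_steps)
        h_old = -logp(q) + 0.5 * p ** 2        # Hamiltonian before
        h_new = -logp(q_new) + 0.5 * p_new ** 2  # Hamiltonian after
        if rng.random() < np.exp(h_old - h_new):
            q = q_new
            accepts += 1
    return accepts / n_iters
```

A small step size keeps the energy error tiny and acceptance near one; past the stability threshold of the Gaussian target the trajectory diverges and almost every proposal is rejected, which is the sensitivity that motivates automatic tuning.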

The Theory and Algorithm of Ergodic Inference

no code implementations17 Nov 2018 Yichuan Zhang

Approximate inference is one of the fundamental research fields in machine learning.

BIG-bench Machine Learning · Variational Inference

Ergodic Measure Preserving Flows

no code implementations27 Sep 2018 Yichuan Zhang, José Miguel Hernández-Lobato, Zoubin Ghahramani

Training probabilistic models with neural network components is intractable in most cases and requires approximations such as Markov chain Monte Carlo (MCMC), which is not scalable and requires significant hyper-parameter tuning, or mean-field variational inference (VI), which is biased.

Variational Inference

Ergodic Inference: Accelerate Convergence by Optimisation

no code implementations25 May 2018 Yichuan Zhang, José Miguel Hernández-Lobato

In this work, we aim to improve upon MCMC and VI by a novel hybrid method based on the idea of reducing simulation bias of finite-length MCMC chains using gradient-based optimisation.

Computational Efficiency · Variational Inference

Semi-Separable Hamiltonian Monte Carlo for Inference in Bayesian Hierarchical Models

no code implementations NeurIPS 2014 Yichuan Zhang, Charles Sutton

Sampling from hierarchical Bayesian models is often difficult for MCMC methods, because of the strong correlations between the model parameters and the hyperparameters.
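The parameter/hyperparameter coupling the abstract refers to can be illustrated with Neal's funnel, a standard toy hierarchical model (this is only an illustration of the problem, not the paper's semi-separable HMC): a log-scale hyperparameter `v` sets the scale of the parameter `x`, so the conditional geometry of `x` changes by orders of magnitude as `v` varies.

```python
import numpy as np

# Neal's funnel: hyperparameter v controls the scale of parameter x.
rng = np.random.default_rng(0)
v = rng.normal(0.0, 3.0, size=100_000)   # log-scale hyperparameter
x = rng.normal(0.0, np.exp(0.5 * v))     # parameter; scale depends on v

# The spread of x under small v is tiny; under large v it is huge.
# A sampler with one fixed proposal scale cannot serve both regimes.
narrow = np.std(x[v < -3])   # x drawn in the funnel's neck
wide = np.std(x[v > 3])      # x drawn in the funnel's mouth
```

The two conditional spreads differ by orders of magnitude, which is exactly what makes a single fixed MCMC proposal (or a single HMC step size) perform poorly across the whole posterior.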

Continuous Relaxations for Discrete Hamiltonian Monte Carlo

no code implementations NeurIPS 2012 Yichuan Zhang, Zoubin Ghahramani, Amos J. Storkey, Charles A. Sutton

Continuous relaxations play an important role in discrete optimization, but have not seen much use in approximate probabilistic inference.

Quasi-Newton Methods for Markov Chain Monte Carlo

no code implementations NeurIPS 2011 Yichuan Zhang, Charles A. Sutton

The performance of Markov chain Monte Carlo methods is often sensitive to the scaling and correlations between the random variables of interest.
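A toy sketch of that sensitivity (not the paper's quasi-Newton construction): random-walk Metropolis on a Gaussian target with badly mismatched per-dimension scales. An isotropic proposal must be sized for the narrowest dimension and barely moves in the widest one; a diagonally preconditioned proposal, matched to the target's scales, explores it far better.

```python
import numpy as np

def rwm_spread(n_iters, scales, proposal_sd, seed=0):
    # Random-walk Metropolis on an independent Gaussian target with
    # per-dimension standard deviations `scales`. Returns the sample
    # standard deviation of the chain in dimension 0 (the widest one),
    # a crude measure of how well the chain explores it.
    rng = np.random.default_rng(seed)
    logp = lambda x: -0.5 * np.sum((x / scales) ** 2)
    x = np.zeros(len(scales))
    first_dim = []
    for _ in range(n_iters):
        prop = x + proposal_sd * rng.standard_normal(len(scales))
        if np.log(rng.random()) < logp(prop) - logp(x):
            x = prop
        first_dim.append(x[0])
    return np.std(first_dim)

scales = np.array([100.0, 1.0, 0.01])        # badly mismatched scales
iso = rwm_spread(5000, scales, 0.01)         # isotropic, sized for the narrowest dim
pre = rwm_spread(5000, scales, 0.5 * scales) # diagonally preconditioned proposal
```

The preconditioned chain covers the wide dimension far more effectively, which is the gap that curvature-aware (e.g. quasi-Newton) proposals aim to close automatically.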
