no code implementations • 25 Oct 2023 • Yuling Yao, Bruno Régaldo-Saint Blancard, Justin Domke

Simulation-based inference has become a popular approach to amortized Bayesian computation.

2 code implementations • NeurIPS 2023 • Chirag Modi, Charles Margossian, Yuling Yao, Robert Gower, David Blei, Lawrence Saul

We study how GSM-VI behaves as a function of the problem dimensionality, the condition number of the target covariance matrix (when the target is Gaussian), and the degree of mismatch between the approximating and exact posterior distributions.

1 code implementation • NeurIPS 2023 • Yuling Yao, Justin Domke

To check the accuracy of Bayesian computations, it is common to use rank-based simulation-based calibration (SBC).
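A minimal sketch of rank-based SBC on a toy conjugate model (the model, names, and sample sizes here are illustrative, not from the paper): draw a parameter from the prior, simulate data, draw from the posterior, and record the rank of the true parameter among the posterior draws. If the posterior computation is correct, the ranks are uniform.

```python
import numpy as np

rng = np.random.default_rng(0)

def sbc_ranks(n_sims=1000, n_post=99):
    """Rank-based SBC for a conjugate normal model:
    theta ~ N(0, 1), y | theta ~ N(theta, 1).
    The exact posterior is N(y/2, 1/2), so ranks should be
    uniform on {0, ..., n_post}."""
    ranks = np.empty(n_sims, dtype=int)
    for i in range(n_sims):
        theta = rng.normal(0.0, 1.0)   # draw from the prior
        y = rng.normal(theta, 1.0)     # simulate data given theta
        # draws from the (here, exact) posterior
        post = rng.normal(y / 2.0, np.sqrt(0.5), size=n_post)
        ranks[i] = np.sum(post < theta)  # rank statistic
    return ranks

ranks = sbc_ranks()
```

In practice one inspects a histogram of `ranks` for deviations from uniformity; a miscalibrated sampler produces skewed or U-shaped rank histograms.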

no code implementations • 12 May 2023 • Yuling Yao, Luiz Max Carvalho, Diego Mesquita, Yann McLatchie

Currently, these predictive distributions are almost exclusively combined using linear mixtures such as Bayesian model averaging, Bayesian stacking, and mixture of experts.
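A linear mixture of predictive distributions can be sketched as follows: given leave-one-out log predictive densities from K models, find simplex weights that maximize the log score of the mixture. This is a minimal illustrative implementation (gradient ascent on a softmax parameterization, chosen here for simplicity), not the optimizer used in any particular paper.

```python
import numpy as np

def stacking_weights(lpd, n_iter=2000, lr=0.1):
    """Stacking of predictive distributions (minimal sketch).
    lpd: (n_points, K) array of leave-one-out log predictive
    densities, one column per model. Returns simplex weights
    maximizing the mean log score of sum_k w_k p_k(y_i)."""
    def softmax(a):
        e = np.exp(a - a.max())
        return e / e.sum()

    n, K = lpd.shape
    p = np.exp(lpd)          # pointwise predictive densities
    a = np.zeros(K)          # unconstrained softmax parameters
    for _ in range(n_iter):
        w = softmax(a)
        mix = p @ w                                 # mixture density per point
        grad_w = (p / mix[:, None]).mean(axis=0)    # d/dw of mean log score
        grad_a = w * (grad_w - w @ grad_w)          # chain rule through softmax
        a += lr * grad_a
    return softmax(a)

# Example: data from N(0, 1); model 1 predicts N(0, 1), model 2 predicts N(5, 1)
rng = np.random.default_rng(2)
y = rng.normal(0.0, 1.0, size=200)
lpd = np.column_stack([
    -0.5 * np.log(2 * np.pi) - 0.5 * y**2,
    -0.5 * np.log(2 * np.pi) - 0.5 * (y - 5.0) ** 2,
])
w = stacking_weights(lpd)
```

The example concentrates nearly all weight on the well-specified model, illustrating how stacking discards models whose predictive distributions fit the data poorly.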

1 code implementation • 22 Jan 2021 • Yuling Yao, Gregor Pirš, Aki Vehtari, Andrew Gelman

We show that stacking is most effective when model predictive performance is heterogeneous in inputs, and we can further improve the stacked mixture with a hierarchical model.

1 code implementation • 1 Sep 2020 • Yuling Yao, Collin Cademartori, Aki Vehtari, Andrew Gelman

The normalizing constant plays an important role in Bayesian computation, and there is a large literature on methods for computing or approximating normalizing constants that cannot be evaluated in closed form.


1 code implementation • 22 Jun 2020 • Yuling Yao, Aki Vehtari, Andrew Gelman

When working with multimodal Bayesian posterior distributions, Markov chain Monte Carlo (MCMC) algorithms have difficulty moving between modes, and default variational or mode-based approximate inferences will understate posterior uncertainty.

no code implementations • 23 May 2019 • Oscar Chang, Yuling Yao, David Williams-King, Hod Lipson

Two main obstacles preventing the widespread adoption of variational Bayesian neural networks are the high parameter overhead that makes them infeasible on large networks, and the difficulty of implementation, which can be thought of as "programming overhead."

1 code implementation • ICML 2018 • Yuling Yao, Aki Vehtari, Daniel Simpson, Andrew Gelman

While it's always possible to compute a variational approximation to a posterior distribution, it can be difficult to discover problems with this approximation.

2 code implementations • 6 Apr 2017 • Yuling Yao, Aki Vehtari, Daniel Simpson, Andrew Gelman

The widely recommended procedure of Bayesian model averaging is flawed in the M-open setting in which the true data-generating process is not one of the candidate models being fit.


9 code implementations • 9 Jul 2015 • Aki Vehtari, Daniel Simpson, Andrew Gelman, Yuling Yao, Jonah Gabry

Importance weighting is a general way to adjust Monte Carlo integration to account for draws from the wrong distribution, but the resulting estimate can be highly variable when the importance ratios have a heavy right tail.
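The basic mechanics can be sketched in a few lines: draw from the wrong distribution, reweight by the density ratio, and stabilize the heavy right tail of the weights. The truncation rule shown here (capping weights at sqrt(S) times their mean) is a simpler cousin of Pareto smoothing, used purely for illustration; it is not the PSIS procedure itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Target p = N(0, 1); proposal q = N(1, 1) (draws from the "wrong" distribution)
S = 10_000
draws = rng.normal(1.0, 1.0, size=S)

# Log importance ratios log p(x) - log q(x); normalizing constants cancel
log_r = -0.5 * draws**2 + 0.5 * (draws - 1.0) ** 2
w = np.exp(log_r - log_r.max())  # subtract max before exponentiating for stability

# Self-normalized importance-sampling estimate of E_p[X] (true value 0)
est_plain = np.sum(w * draws) / np.sum(w)

# Crude tail stabilization: truncate weights at sqrt(S) times their mean
w_trunc = np.minimum(w, np.sqrt(S) * w.mean())
est_trunc = np.sum(w_trunc * draws) / np.sum(w_trunc)
```

With a larger mismatch between p and q, the raw ratios develop a heavy right tail and `est_plain` becomes highly variable; taming the largest weights trades a small bias for a large variance reduction, which is the trade-off Pareto smoothing manages more carefully.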

Papers With Code is a free resource with all data licensed under CC-BY-SA.