no code implementations • 27 Jul 2023 • Kyurae Kim, Yian Ma, Jacob R. Gardner
We prove that black-box variational inference (BBVI) with control variates, particularly the sticking-the-landing (STL) estimator, converges at a geometric (traditionally called "linear") rate under perfect variational family specification.
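For context, here is a minimal sketch of the sticking-the-landing estimator for a mean-field Gaussian variational family, assuming PyTorch autograd and a toy Gaussian target `log_p` (neither of which comes from the paper): the log q term is evaluated with detached variational parameters, so the zero-mean score component drops out of the reparameterization gradient.

```python
import torch

# Hypothetical toy target: an isotropic Gaussian log-density.
def log_p(z):
    return -0.5 * (z ** 2).sum(-1)

def stl_elbo(mu, log_sigma, n_samples=16):
    """STL ELBO estimate for a mean-field Gaussian q. The log q term uses
    *detached* variational parameters, removing the score component from
    the reparameterization gradient."""
    eps = torch.randn(n_samples, mu.shape[0])
    z = mu + log_sigma.exp() * eps                       # path gradient kept
    q_stopped = torch.distributions.Normal(mu.detach(), log_sigma.exp().detach())
    log_q = q_stopped.log_prob(z).sum(-1)
    return (log_p(z) - log_q).mean()

mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)
stl_elbo(mu, log_sigma).backward()                       # gradients land in mu, log_sigma
```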
no code implementations • 5 Jul 2023 • Xunpeng Huang, Hanze Dong, Yifan Hao, Yian Ma, Tong Zhang
The efficacy of modern generative models, diffusion models in particular, commonly hinges on the precision of score estimation along the diffusion path, which in turn determines their ability to generate high-quality data samples.
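As background, the snippet below is a generic denoising score matching loss at a single noise level, the kind of score-estimation objective diffusion-based samplers rely on; `score_net`, the data batch `x0`, and the noise level `sigma` are illustrative placeholders, not the paper's estimator.

```python
import torch

def dsm_loss(score_net, x0, sigma):
    """Denoising score matching at one noise level (a generic sketch;
    the network and noise schedule here are assumptions)."""
    noise = torch.randn_like(x0)
    x_t = x0 + sigma * noise                   # data perturbed along the diffusion path
    target = -noise / sigma                    # score of the Gaussian perturbation kernel
    return ((score_net(x_t, sigma) - target) ** 2).sum(-1).mean()
```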
no code implementations • 24 May 2023 • Kyurae Kim, Kaiwen Wu, Jisu Oh, Yian Ma, Jacob R. Gardner
We provide the first convergence guarantee for full black-box variational inference (BBVI), also known as Monte Carlo variational inference.
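For reference, a sketch of one plain (non-STL) reparameterization-gradient BBVI update for a mean-field Gaussian family, the kind of Monte Carlo ELBO step such convergence analyses concern; the optimizer and sample count are illustrative choices, not taken from the paper.

```python
import torch

def bbvi_step(log_p, mu, log_sigma, opt, n_samples=16):
    """One reparameterization-gradient BBVI update for a mean-field Gaussian q."""
    opt.zero_grad()
    eps = torch.randn(n_samples, mu.shape[0])
    z = mu + log_sigma.exp() * eps             # reparameterized draws from q
    log_q = torch.distributions.Normal(mu, log_sigma.exp()).log_prob(z).sum(-1)
    loss = -(log_p(z) - log_q).mean()          # negative Monte Carlo ELBO
    loss.backward()
    opt.step()
    return -loss.item()

# Usage sketch:
# mu = torch.zeros(2, requires_grad=True); log_sigma = torch.zeros(2, requires_grad=True)
# opt = torch.optim.Adam([mu, log_sigma], lr=1e-2)
```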
1 code implementation • 7 May 2023 • Dongxia Wu, Ruijia Niu, Matteo Chinazzi, Yian Ma, Rose Yu
To balance quality and cost, various domain areas of science and engineering run simulations at multiple levels of sophistication.
no code implementations • 29 Sep 2021 • Ruoqi Shen, Liyao Gao, Yian Ma
Early stopping is a simple and widely used method to prevent over-training in neural networks.
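A minimal sketch of generic early stopping on a validation metric; `run_epoch` and `val_loss` are hypothetical callables standing in for a concrete training setup.

```python
def train_with_early_stopping(run_epoch, val_loss, max_epochs=200, patience=10):
    """Stop training once the validation loss has not improved for `patience`
    consecutive epochs; `run_epoch` and `val_loss` are hypothetical callables
    for one training epoch / one evaluation."""
    best, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        run_epoch()
        loss = val_loss()
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break                              # no recent improvement: stop early
    return best
```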
no code implementations • 30 Jun 2021 • Ghassen Jerfel, Serena Wang, Clara Fannjiang, Katherine A. Heller, Yian Ma, Michael I. Jordan
We thus propose a novel combination of optimization and sampling techniques for approximate Bayesian inference by constructing an IS proposal distribution through the minimization of a forward KL (FKL) divergence.
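A sketch of the general idea, assuming a torch.distributions proposal whose log_prob returns one value per sample (e.g. MultivariateNormal) and a hypothetical unnormalized target `log_p`: a self-normalized importance-weighted surrogate for the forward KL that can be minimized over the proposal's parameters. This illustrates the construction, not the paper's exact estimator.

```python
import torch

def fkl_surrogate(log_p, q_dist, n_samples=64):
    """Self-normalized importance-weighted surrogate for KL(p || q):
    minimizing it over q's parameters fits the IS proposal q toward p."""
    z = q_dist.sample((n_samples,))            # draw from the current proposal q
    log_q = q_dist.log_prob(z)
    log_w = log_p(z) - log_q.detach()          # unnormalized importance weights
    w = torch.softmax(log_w, dim=0)            # self-normalize
    return -(w * log_q).sum()                  # ≈ -E_p[log q] + const
```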
no code implementations • AABI Symposium 2021 • Ghassen Jerfel, Serena Lutong Wang, Clara Fannjiang, Katherine A Heller, Yian Ma, Michael Jordan
Variational Inference (VI) is a popular alternative to asymptotically exact sampling in Bayesian inference.
no code implementations • 6 Nov 2020 • Alexander D'Amour, Katherine Heller, Dan Moldovan, Ben Adlam, Babak Alipanahi, Alex Beutel, Christina Chen, Jonathan Deaton, Jacob Eisenstein, Matthew D. Hoffman, Farhad Hormozdiari, Neil Houlsby, Shaobo Hou, Ghassen Jerfel, Alan Karthikesalingam, Mario Lucic, Yian Ma, Cory McLean, Diana Mincu, Akinori Mitani, Andrea Montanari, Zachary Nado, Vivek Natarajan, Christopher Nielson, Thomas F. Osborne, Rajiv Raman, Kim Ramasamy, Rory Sayres, Jessica Schrouff, Martin Seneviratne, Shannon Sequeira, Harini Suresh, Victor Veitch, Max Vladymyrov, Xuezhi Wang, Kellie Webster, Steve Yadlowsky, Taedong Yun, Xiaohua Zhai, D. Sculley
Predictors returned by underspecified pipelines are often treated as equivalent based on their training domain performance, but we show here that such predictors can behave very differently in deployment domains.
no code implementations • AABI Symposium 2019 • Matthew D. Hoffman, Yian Ma
Variational inference (VI) and Markov chain Monte Carlo (MCMC) are approximate posterior inference algorithms that are often said to have complementary strengths, with VI being fast but biased and MCMC being slower but asymptotically unbiased.