no code implementations • 3 Nov 2023 • Jonathan H. Huggins, Jeffrey W. Miller
Under model misspecification, it is known that Bayesian posteriors often do not properly quantify uncertainty about true or pseudo-true parameters.
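As a rough illustration of why this happens (standard asymptotic reasoning, not a result specific to this paper): the posterior's spread is governed by the curvature H of the log-likelihood, while the sampling variability of the corresponding estimator follows the sandwich covariance, and the two agree only when the model is correct.

```latex
% Illustrative misspecified Bernstein--von Mises heuristic (not from the paper):
% theta* is the pseudo-true parameter; H is the expected negative Hessian and
% J the score covariance, both evaluated at theta*.
\[
  \theta^\star = \arg\min_\theta \, \mathrm{KL}\bigl(p_{\mathrm{true}} \,\|\, p_\theta\bigr), \qquad
  H = -\mathbb{E}\bigl[\nabla^2_\theta \log p_\theta(X)\bigr]\big|_{\theta^\star}, \qquad
  J = \mathbb{E}\bigl[\nabla_\theta \log p_\theta(X)\,\nabla_\theta \log p_\theta(X)^\top\bigr]\big|_{\theta^\star}.
\]
\[
  \Pi(\theta \mid X_{1:n}) \approx \mathcal{N}\bigl(\hat\theta_n,\; n^{-1} H^{-1}\bigr), \qquad
  \hat\theta_n \;\dot\sim\; \mathcal{N}\bigl(\theta^\star,\; n^{-1} H^{-1} J H^{-1}\bigr).
\]
```

Credible sets are therefore calibrated only when J = H, which generally fails under misspecification.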
1 code implementation • 24 Feb 2023 • Yu Wang, Mikołaj Kasprzak, Jonathan H. Huggins
Variational Inference (VI) is an attractive alternative to Markov Chain Monte Carlo (MCMC) due to its computational efficiency in the case of large datasets and/or complex models with high-dimensional parameters.
no code implementations • 25 Jul 2022 • Jeffrey Negrea, Jun Yang, Haoyue Feng, Daniel M. Roy, Jonathan H. Huggins
The tuning of stochastic gradient algorithms (SGAs) for optimization and sampling is often based on heuristics and trial-and-error rather than generalizable theory.
1 code implementation • 29 Mar 2022 • Manushi Welandawe, Michael Riis Andersen, Aki Vehtari, Jonathan H. Huggins
RAABBVI adaptively decreases the learning rate by detecting convergence of the fixed-learning-rate iterates, then estimates the symmetrized Kullback–Leibler (KL) divergence between the current variational approximation and the optimal one.
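For intuition, here is a minimal sketch of the symmetrized KL divergence between two multivariate Gaussian approximations (my own illustration with hypothetical inputs, not the RAABBVI implementation):

```python
import numpy as np

def kl_gaussian(mu0, Sigma0, mu1, Sigma1):
    """KL( N(mu0, Sigma0) || N(mu1, Sigma1) ) for multivariate Gaussians."""
    d = mu0.shape[0]
    Sigma1_inv = np.linalg.inv(Sigma1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(Sigma1_inv @ Sigma0)
        + diff @ Sigma1_inv @ diff
        - d
        + np.log(np.linalg.det(Sigma1) / np.linalg.det(Sigma0))
    )

def symmetrized_kl(mu0, Sigma0, mu1, Sigma1):
    """Symmetrized KL: KL(q0 || q1) + KL(q1 || q0)."""
    return (kl_gaussian(mu0, Sigma0, mu1, Sigma1)
            + kl_gaussian(mu1, Sigma1, mu0, Sigma0))

# Example: current approximation vs. a (hypothetical) optimal one.
mu_curr, Sig_curr = np.zeros(2), np.eye(2)
mu_opt,  Sig_opt  = np.array([0.5, -0.3]), 1.5 * np.eye(2)
print(symmetrized_kl(mu_curr, Sig_curr, mu_opt, Sig_opt))
```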
no code implementations • ICML Workshop INNF 2021 • Akash Kumar Dhaka, Alejandro Catalina, Manushi Welandawe, Michael Riis Andersen, Jonathan H. Huggins, Aki Vehtari
Current black-box variational inference (BBVI) methods require the user to make numerous design choices, such as the selection of the variational objective and approximating family, yet there is little principled guidance on how to do so.
no code implementations • NeurIPS 2021 • Akash Kumar Dhaka, Alejandro Catalina, Manushi Welandawe, Michael Riis Andersen, Jonathan H. Huggins, Aki Vehtari
Our framework and supporting experiments help to distinguish between the behavior of BBVI methods for approximating low-dimensional versus moderate-to-high-dimensional posteriors.
no code implementations • NeurIPS Workshop ICBINB 2020 • Tin D. Nguyen, Jonathan H. Huggins, Lorenzo Masoero, Lester Mackey, Tamara Broderick
Bayesian nonparametric models based on completely random measures (CRMs) offer flexibility when the number of clusters or latent components in a data set is unknown.
no code implementations • NeurIPS 2020 • Akash Kumar Dhaka, Alejandro Catalina, Michael Riis Andersen, Måns Magnusson, Jonathan H. Huggins, Aki Vehtari
We consider the problem of fitting variational posterior approximations using stochastic optimization methods.
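As a hedged sketch of what such stochastic optimization can look like in the simplest case, the following fits a one-dimensional Gaussian variational approximation with single-sample reparameterization gradients to a toy target whose score is known (illustrative only, not the authors' method; the target and step size are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: unnormalized log-density with known score.
# Standard normal, so grad_logp(theta) = -theta.
def grad_logp(theta):
    return -theta

# Variational family q = N(m, s^2) with s = exp(log_s); ascend the ELBO by
# stochastic gradients using the reparameterization theta = m + s * eps.
m, log_s = 2.0, 0.0
lr = 0.05
for step in range(2000):
    s = np.exp(log_s)
    eps = rng.standard_normal()
    theta = m + s * eps
    g = grad_logp(theta)
    grad_m = g                       # d/dm  E[log p(m + s*eps)]
    grad_log_s = g * eps * s + 1.0   # chain rule + entropy term d(log s)/d(log_s) = 1
    m += lr * grad_m
    log_s += lr * grad_log_s

print(m, np.exp(log_s))  # should approach the target's mean 0 and sd 1
```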
1 code implementation • 9 Oct 2019 • Jonathan H. Huggins, Mikołaj Kasprzak, Trevor Campbell, Tamara Broderick
Finally, we demonstrate the utility of our proposed workflow and error bounds on a robust regression problem and on a real-data example with a widely used multilevel hierarchical model.
no code implementations • 17 May 2019 • Brian L. Trippe, Jonathan H. Huggins, Raj Agrawal, Tamara Broderick
Due to the ease of modern data collection, applied statisticians often have access to a large set of covariates that they wish to relate to some observed outcome.
1 code implementation • 16 May 2019 • Raj Agrawal, Jonathan H. Huggins, Brian Trippe, Tamara Broderick
Discovering interaction effects on a response of interest is a fundamental problem faced in biology, medicine, economics, and many other scientific disciplines.
no code implementations • 9 Oct 2018 • Raj Agrawal, Trevor Campbell, Jonathan H. Huggins, Tamara Broderick
Random feature maps (RFMs) and the Nyström method both consider low-rank approximations to the kernel matrix as a potential solution.
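For background, a minimal sketch of the random feature map idea for an RBF kernel (random Fourier features; an illustrative generic construction, not necessarily the one analyzed in the paper, with function names and sizes chosen by me):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, lengthscale=1.0):
    """Exact RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2))."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * lengthscale ** 2))

def random_fourier_features(X, num_features=500, lengthscale=1.0):
    """Features Z such that Z @ Z.T approximates the RBF kernel matrix."""
    d = X.shape[1]
    W = rng.standard_normal((d, num_features)) / lengthscale
    b = rng.uniform(0.0, 2 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

X = rng.standard_normal((200, 3))
K_exact = rbf_kernel(X, X)
Z = random_fourier_features(X)
K_approx = Z @ Z.T   # low-rank approximation to K_exact
print(np.abs(K_exact - K_approx).max())
```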
no code implementations • 25 Sep 2018 • Jonathan H. Huggins, Trevor Campbell, Mikołaj Kasprzak, Tamara Broderick
Bayesian inference typically requires the computation of an approximation to the posterior distribution.
no code implementations • 26 Jun 2018 • Jonathan H. Huggins, Trevor Campbell, Mikołaj Kasprzak, Tamara Broderick
We develop an approach to scalable approximate Gaussian process (GP) regression with finite-data guarantees on the accuracy of pointwise posterior mean and variance estimates.
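For reference, the pointwise posterior mean and variance of exact GP regression (the quantities the guarantees concern) can be computed as below; this is standard background with an RBF kernel and hyperparameters I picked for illustration, not the paper's scalable approximation:

```python
import numpy as np

def gp_posterior(X_train, y_train, X_test, lengthscale=1.0, signal_var=1.0, noise_var=0.1):
    """Pointwise posterior mean and variance for exact GP regression with an RBF kernel."""
    def k(A, B):
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return signal_var * np.exp(-sq / (2 * lengthscale ** 2))

    K = k(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_star = k(X_test, X_train)                       # cross-covariances
    mean = K_star @ np.linalg.solve(K, y_train)       # k_*^T K^{-1} y
    var = signal_var - np.einsum('ij,ij->i', K_star, np.linalg.solve(K, K_star.T).T)
    return mean, var

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(50, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)
Xs = np.linspace(-3, 3, 5)[:, None]
mu, v = gp_posterior(X, y, Xs)
print(mu, v)
```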
1 code implementation • NeurIPS 2018 • Jonathan H. Huggins, Lester Mackey
Computable Stein discrepancies have been deployed for a variety of applications, ranging from sampler selection in posterior inference to approximate Bayesian inference to goodness-of-fit testing.
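For reference, one standard computable construction is the Langevin kernel Stein discrepancy, which needs only samples from q and the score of p (a generic definition for context, not necessarily the exact estimator studied here):

```latex
% Kernel Stein discrepancy with base kernel k and score s_p(x) = \nabla_x \log p(x):
\[
  \mathrm{KSD}^2(q, p) \;=\; \mathbb{E}_{x, x' \sim q}\bigl[k_p(x, x')\bigr], \qquad
  k_p(x, x') = s_p(x)^\top s_p(x')\, k(x, x')
    + s_p(x)^\top \nabla_{x'} k(x, x')
    + s_p(x')^\top \nabla_{x} k(x, x')
    + \nabla_x \cdot \nabla_{x'} k(x, x').
\]
```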
1 code implementation • NeurIPS 2017 • Jonathan H. Huggins, Ryan P. Adams, Tamara Broderick
We provide theoretical guarantees on the quality of point (MAP) estimates, the approximate posterior, and posterior mean and uncertainty estimates.
no code implementations • 20 May 2016 • Jonathan H. Huggins, James Zou
As an illustration, we apply our framework to derive finite-sample error bounds of approximate unadjusted Langevin dynamics.
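For context, a minimal sketch of the unadjusted Langevin algorithm itself (the sampler those bounds concern), written for a generic differentiable log-density; the toy target and step size are my own choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_log_pi(theta):
    """Score of the target; here a standard 2-D Gaussian for illustration."""
    return -theta

def ula(theta0, step_size, num_steps):
    """Unadjusted Langevin algorithm: Euler-Maruyama discretization of the Langevin diffusion."""
    theta = np.array(theta0, dtype=float)
    samples = []
    for _ in range(num_steps):
        noise = rng.standard_normal(theta.shape)
        theta = theta + 0.5 * step_size * grad_log_pi(theta) + np.sqrt(step_size) * noise
        samples.append(theta.copy())
    return np.array(samples)

chain = ula(theta0=[3.0, -3.0], step_size=0.1, num_steps=5000)
print(chain.mean(axis=0), chain.var(axis=0))  # roughly zero mean, slightly inflated variance
```

Because the Metropolis correction is skipped, the chain's stationary distribution is biased away from the target, which is exactly why finite-sample error bounds for approximate ULA are of interest.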
2 code implementations • NeurIPS 2016 • Jonathan H. Huggins, Trevor Campbell, Tamara Broderick
We demonstrate the efficacy of our approach on a number of synthetic and real-world datasets, and find that, in practice, the size of the coreset is independent of the original dataset size.
no code implementations • 19 May 2015 • Jonathan H. Huggins, Joshua B. Tenenbaum
Common statistical practice has shown that the full power of Bayesian methods is not realized until hierarchical priors are used, as these allow for greater "robustness" and the ability to "share statistical strength."
no code implementations • 1 Mar 2015 • Jonathan H. Huggins, Karthik Narasimhan, Ardavan Saeedi, Vikash K. Mansinghka
We derive the small-variance asymptotics for parametric and nonparametric Markov jump processes (MJPs), for both directly observed and hidden-state models.
no code implementations • 31 Dec 2014 • Jonathan H. Huggins, Ardavan Saeedi, Matthew J. Johnson
In this note we provide detailed derivations of two versions of small-variance asymptotics for hierarchical Dirichlet process (HDP) mixture models and the HDP hidden Markov model (HDP-HMM, a.k.a. the infinite HMM).
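For readers unfamiliar with the recipe, the basic small-variance asymptotics idea is shown below for a Dirichlet process Gaussian mixture rather than the HDP models of this note (an illustrative special case; the derivations in the note are analogous but more involved):

```latex
% With likelihood N(x | mu_k, sigma^2 I) and DP concentration
% alpha = exp(-lambda / (2 sigma^2)), letting sigma^2 -> 0 collapses the
% MAP/Gibbs objective to a k-means-like objective with a per-cluster penalty:
\[
  \min_{K,\; \{C_k\},\; \{\mu_k\}} \;\; \sum_{k=1}^{K} \sum_{x_i \in C_k} \lVert x_i - \mu_k \rVert^2 \;+\; \lambda K .
\]
```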
no code implementations • 30 Jun 2014 • Jonathan H. Huggins, Frank Wood
This paper reviews recent advances in Bayesian nonparametric techniques for constructing and performing inference in infinite hidden Markov models.
no code implementations • 2 Jul 2013 • Jonathan H. Huggins, Cynthia Rudin
This paper formalizes a latent variable inference problem we call supervised pattern discovery, the goal of which is to find sets of observations that belong to a single "pattern."