Search Results for author: Jonathan H. Huggins

Found 23 papers, 7 papers with code

Reproducible Parameter Inference Using Bagged Posteriors

no code implementations • 3 Nov 2023 • Jonathan H. Huggins, Jeffrey W. Miller

Under model misspecification, it is known that Bayesian posteriors often do not properly quantify uncertainty about true or pseudo-true parameters.

Uncertainty Quantification
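For intuition, the bagged posterior ("BayesBag") of the title refits the posterior on bootstrap replicates of the data and pools the draws. A minimal sketch, assuming `data` is a NumPy array and `sample_posterior` is a user-supplied sampler (a hypothetical placeholder, not part of the paper's code):

```python
import numpy as np

def bayesbag(data, sample_posterior, n_boot=50, n_samples=200, seed=0):
    """Bagged posterior: pool posterior draws fit on bootstrap replicates.

    `sample_posterior(data, n_samples)` is a user-supplied routine
    (hypothetical placeholder) returning an (n_samples, dim) array of draws.
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    draws = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)      # bootstrap resample of the data
        draws.append(sample_posterior(data[idx], n_samples))
    return np.concatenate(draws, axis=0)      # draws from the bagged posterior
```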

A Targeted Accuracy Diagnostic for Variational Approximations

1 code implementation • 24 Feb 2023 • Yu Wang, Mikołaj Kasprzak, Jonathan H. Huggins

Variational Inference (VI) is an attractive alternative to Markov Chain Monte Carlo (MCMC) due to its computational efficiency in the case of large datasets and/or complex models with high-dimensional parameters.

Computational Efficiency Variational Inference

Tuning Stochastic Gradient Algorithms for Statistical Inference via Large-Sample Asymptotics

no code implementations • 25 Jul 2022 • Jeffrey Negrea, Jun Yang, Haoyue Feng, Daniel M. Roy, Jonathan H. Huggins

The tuning of stochastic gradient algorithms (SGAs) for optimization and sampling is often based on heuristics and trial-and-error rather than generalizable theory.

Robust, Automated, and Accurate Black-box Variational Inference

1 code implementation • 29 Mar 2022 • Manushi Welandawe, Michael Riis Andersen, Aki Vehtari, Jonathan H. Huggins

RAABBVI adaptively decreases the learning rate by detecting convergence of the fixed-learning-rate iterates, then estimates the symmetrized Kullback-Leibler (KL) divergence between the current variational approximation and the optimal one.

Bayesian Inference Stochastic Optimization +1
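For intuition about the convergence statistic quoted above, a minimal sketch of the symmetrized KL divergence for diagonal (mean-field) Gaussian approximations; the paper's estimator applies more generally, so this is an illustrative special case:

```python
import numpy as np

def kl_diag_gauss(m1, s1, m2, s2):
    """KL( N(m1, diag(s1^2)) || N(m2, diag(s2^2)) ) for diagonal Gaussians."""
    return 0.5 * np.sum((s1 / s2) ** 2 + ((m2 - m1) / s2) ** 2
                        - 1.0 + 2.0 * np.log(s2 / s1))

def symmetrized_kl(m1, s1, m2, s2):
    """Symmetrized KL between two mean-field Gaussian approximations."""
    return kl_diag_gauss(m1, s1, m2, s2) + kl_diag_gauss(m2, s2, m1, s1)
```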

Challenges for BBVI with Normalizing Flows

no code implementations • ICML Workshop INNF 2021 • Akash Kumar Dhaka, Alejandro Catalina, Manushi Welandawe, Michael Riis Andersen, Jonathan H. Huggins, Aki Vehtari

Current black-box variational inference (BBVI) methods require the user to make numerous design choices, such as the selection of variational objective and approximating family, yet there is little principled guidance on how to do so.

Variational Inference

Challenges and Opportunities in High Dimensional Variational Inference

no code implementations • NeurIPS 2021 • Akash Kumar Dhaka, Alejandro Catalina, Manushi Welandawe, Michael Riis Andersen, Jonathan H. Huggins, Aki Vehtari

Our framework and supporting experiments help to distinguish between the behavior of BBVI methods for approximating low-dimensional versus moderate-to-high-dimensional posteriors.

Variational Inference

Independent versus truncated finite approximations for Bayesian nonparametric inference

no code implementations • NeurIPS Workshop ICBINB 2020 • Tin D. Nguyen, Jonathan H. Huggins, Lorenzo Masoero, Lester Mackey, Tamara Broderick

Bayesian nonparametric models based on completely random measures (CRMs) offer flexibility when the number of clusters or latent components in a data set is unknown.

Image Denoising

Validated Variational Inference via Practical Posterior Error Bounds

1 code implementation • 9 Oct 2019 • Jonathan H. Huggins, Mikołaj Kasprzak, Trevor Campbell, Tamara Broderick

Finally, we demonstrate the utility of our proposed workflow and error bounds on a robust regression problem and on a real-data example with a widely used multilevel hierarchical model.

Bayesian Inference Variational Inference

LR-GLM: High-Dimensional Bayesian Inference Using Low-Rank Data Approximations

no code implementations • 17 May 2019 • Brian L. Trippe, Jonathan H. Huggins, Raj Agrawal, Tamara Broderick

Due to the ease of modern data collection, applied statisticians often have access to a large set of covariates that they wish to relate to some observed outcome.

Bayesian Inference

The Kernel Interaction Trick: Fast Bayesian Discovery of Pairwise Interactions in High Dimensions

1 code implementation • 16 May 2019 • Raj Agrawal, Jonathan H. Huggins, Brian Trippe, Tamara Broderick

Discovering interaction effects on a response of interest is a fundamental problem faced in biology, medicine, economics, and many other scientific disciplines.

Uncertainty Quantification

Data-dependent compression of random features for large-scale kernel approximation

no code implementations • 9 Oct 2018 • Raj Agrawal, Trevor Campbell, Jonathan H. Huggins, Tamara Broderick

Random feature maps (RFMs) and the Nyström method both consider low-rank approximations to the kernel matrix as a potential solution.

feature selection
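Both approaches exploit the same low-rank structure. As a point of reference, a minimal random Fourier feature sketch for an RBF kernel; this is the standard RFM construction, not the paper's data-dependent compression scheme:

```python
import numpy as np

def rff_features(X, n_features=100, lengthscale=1.0, seed=0):
    """Random Fourier features z(x) with E[z(x) . z(y)] ~= RBF k(x, y)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / lengthscale, size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Z = rff_features(X) gives a rank-n_features approximation K ~= Z @ Z.T
```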

Scalable Gaussian Process Inference with Finite-data Mean and Variance Guarantees

no code implementations • 26 Jun 2018 • Jonathan H. Huggins, Trevor Campbell, Mikołaj Kasprzak, Tamara Broderick

We develop an approach to scalable approximate GP regression with finite-data guarantees on the accuracy of pointwise posterior mean and variance estimates.

Gaussian Processes regression +1
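The pointwise quantities those guarantees cover are the exact GP posterior mean and variance. A minimal sketch of exact GP regression with an RBF kernel (kernel choice and noise level are illustrative assumptions, not the paper's scalable method):

```python
import numpy as np

def rbf(A, B, ls=1.0):
    """RBF kernel matrix k(a, b) = exp(-||a - b||^2 / (2 ls^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def gp_posterior(X, y, Xstar, noise=0.1, ls=1.0):
    """Exact GP regression: pointwise posterior mean and variance at Xstar."""
    K = rbf(X, X, ls) + noise ** 2 * np.eye(len(X))
    Ks = rbf(X, Xstar, ls)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    V = np.linalg.solve(L, Ks)
    var = 1.0 - (V ** 2).sum(axis=0)   # k(x*, x*) = 1 for this RBF kernel
    return mean, var
```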

Random Feature Stein Discrepancies

1 code implementation • NeurIPS 2018 • Jonathan H. Huggins, Lester Mackey

Computable Stein discrepancies have been deployed for a variety of applications, ranging from sampler selection in posterior inference to approximate Bayesian inference to goodness-of-fit testing.

Bayesian Inference
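The computable discrepancies referenced here include the quadratic-time kernel Stein discrepancy. A minimal V-statistic sketch with an IMQ base kernel; the paper's contribution is a faster random-feature variant, which this is not:

```python
import numpy as np

def ksd_imq(X, score, c=1.0, beta=-0.5):
    """Kernel Stein discrepancy (V-statistic) with IMQ kernel
    k(x, y) = (c^2 + ||x - y||^2)^beta.

    X: (n, d) sample array; score(x): gradient of log target density at x.
    """
    n, d = X.shape
    S = np.array([score(x) for x in X])              # (n, d) score evaluations
    R = X[:, None, :] - X[None, :, :]                # pairwise differences x - y
    r2 = (R ** 2).sum(-1)
    q = c ** 2 + r2
    K, dK = q ** beta, 2.0 * beta * q ** (beta - 1)  # kernel and radial derivative
    ss = S @ S.T                                     # s(x) . s(y)
    sxr = np.einsum('id,ijd->ij', S, R)              # s(x) . (x - y)
    syr = np.einsum('jd,ijd->ij', S, R)              # s(y) . (x - y)
    tr = -2.0 * beta * (d * q ** (beta - 1)
                        + 2.0 * (beta - 1) * q ** (beta - 2) * r2)
    U = K * ss - dK * sxr + dK * syr + tr            # Stein kernel u_p(x, y)
    return np.sqrt(U.mean())
```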

PASS-GLM: polynomial approximate sufficient statistics for scalable Bayesian GLM inference

1 code implementation • NeurIPS 2017 • Jonathan H. Huggins, Ryan P. Adams, Tamara Broderick

We provide theoretical guarantees on the quality of point (MAP) estimates, the approximate posterior, and posterior mean and uncertainty estimates.

regression
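For intuition, a minimal sketch of the degree-2 variant of the PASS-GLM idea for logistic regression: fit a quadratic polynomial to the log-sigmoid, reduce the data to one-pass sufficient statistics, and solve for the MAP estimate in closed form. The polynomial-fit details and the Gaussian prior below are illustrative assumptions:

```python
import numpy as np

def pass_glm_logistic_map(X, y, R=4.0, prior_var=10.0):
    """One-pass MAP estimate for logistic regression via a degree-2
    polynomial approximation of the log-sigmoid (PASS-GLM-style sketch).

    X: (n, d) design matrix; y: (n,) labels in {-1, +1}.
    """
    n, d = X.shape
    # least-squares degree-2 fit of t -> log sigmoid(t) on [-R, R]
    ts = np.linspace(-R, R, 200)
    a2, a1, _ = np.polyfit(ts, -np.log1p(np.exp(-ts)), 2)
    # one pass over the data: low-dimensional sufficient statistics
    t = X.T @ y                        # sum_i y_i x_i
    M = X.T @ X                        # sum_i x_i x_i^T  (uses y_i^2 = 1)
    # maximize a1 * t.theta + a2 * theta^T M theta - ||theta||^2 / (2 prior_var)
    return np.linalg.solve(np.eye(d) / prior_var - 2.0 * a2 * M, a1 * t)
```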

Quantifying the accuracy of approximate diffusions and Markov chains

no code implementations • 20 May 2016 • Jonathan H. Huggins, James Zou

As an illustration, we apply our framework to derive finite-sample error bounds of approximate unadjusted Langevin dynamics.
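As context for that example, a minimal sketch of unadjusted Langevin dynamics itself, i.e., the Euler discretization of the Langevin diffusion targeting pi, run without a Metropolis correction (step size and initialization are illustrative):

```python
import numpy as np

def ula(grad_log_pi, theta0, step=1e-2, n_steps=1000, seed=0):
    """Unadjusted Langevin algorithm: theta' = theta + h * grad log pi(theta)
    + sqrt(2h) * standard normal noise, with no accept/reject step."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    samples = np.empty((n_steps, theta.size))
    for k in range(n_steps):
        noise = rng.standard_normal(theta.size)
        theta = theta + step * grad_log_pi(theta) + np.sqrt(2.0 * step) * noise
        samples[k] = theta
    return samples
```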

Coresets for Scalable Bayesian Logistic Regression

2 code implementations • NeurIPS 2016 • Jonathan H. Huggins, Trevor Campbell, Tamara Broderick

We demonstrate the efficacy of our approach on a number of synthetic and real-world datasets, and find that, in practice, the size of the coreset is independent of the original dataset size.

Bayesian Inference regression +1
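For intuition, a minimal sketch of the weighted-subsample idea: draw points from an importance distribution and reweight so the coreset log-likelihood is an unbiased estimate of the full one. Using norms as sampling probabilities is a crude stand-in for the paper's sensitivity-based bounds:

```python
import numpy as np

def logistic_coreset(X, y, m, seed=0):
    """Weighted subsample (coreset-style) for the logistic log-likelihood.
    Sampling probabilities proportional to ||x_i|| are a crude proxy for
    the paper's sensitivity upper bounds."""
    rng = np.random.default_rng(seed)
    p = np.linalg.norm(X, axis=1)
    p = p / p.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=p)
    w = 1.0 / (m * p[idx])             # importance weights: unbiased estimate
    return X[idx], y[idx], w

def weighted_log_lik(theta, Xc, yc, w):
    """Coreset log-likelihood: sum_i w_i * log sigmoid(y_i x_i . theta)."""
    return -np.sum(w * np.log1p(np.exp(-yc * (Xc @ theta))))
```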

Risk and Regret of Hierarchical Bayesian Learners

no code implementations • 19 May 2015 • Jonathan H. Huggins, Joshua B. Tenenbaum

Common statistical practice has shown that the full power of Bayesian methods is not realized until hierarchical priors are used, as these allow for greater "robustness" and the ability to "share statistical strength."

feature selection

JUMP-Means: Small-Variance Asymptotics for Markov Jump Processes

no code implementations • 1 Mar 2015 • Jonathan H. Huggins, Karthik Narasimhan, Ardavan Saeedi, Vikash K. Mansinghka

We derive the small-variance asymptotics for parametric and nonparametric MJPs for both directly observed and hidden state models.

Detailed Derivations of Small-Variance Asymptotics for some Hierarchical Bayesian Nonparametric Models

no code implementations • 31 Dec 2014 • Jonathan H. Huggins, Ardavan Saeedi, Matthew J. Johnson

In this note we provide detailed derivations of two versions of small-variance asymptotics for hierarchical Dirichlet process (HDP) mixture models and the HDP hidden Markov model (HDP-HMM, a.k.a. the infinite HMM).

Infinite Structured Hidden Semi-Markov Models

no code implementations • 30 Jun 2014 • Jonathan H. Huggins, Frank Wood

This paper reviews recent advances in Bayesian nonparametric techniques for constructing and performing inference in infinite hidden Markov models.

A Statistical Learning Theory Framework for Supervised Pattern Discovery

no code implementations • 2 Jul 2013 • Jonathan H. Huggins, Cynthia Rudin

This paper formalizes a latent variable inference problem we call "supervised pattern discovery," the goal of which is to find sets of observations that belong to a single "pattern."

Learning Theory
