1 code implementation • 24 Oct 2024 • Naitong Chen, Jonathan H. Huggins, Trevor Campbell
A Bayesian coreset is a small, weighted subset of a data set that replaces the full data during inference to reduce computational cost.
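To make the idea concrete, here is a minimal sketch (toy Gaussian model; the subset selection and weights below are placeholders, since real coreset constructions optimize both): inference evaluates a weighted sum over a few points in place of the full-data sum.

```python
import numpy as np

def full_log_likelihood(theta, data):
    # Full-data log-likelihood: a sum over all N points (toy Gaussian model).
    return np.sum(-0.5 * (data - theta) ** 2)

def coreset_log_likelihood(theta, points, weights):
    # Coreset log-likelihood: a weighted sum over M << N points, with weights
    # chosen so this approximates the full-data sum.
    return np.sum(weights * (-0.5 * (points - theta) ** 2))

data = np.random.default_rng(0).standard_normal(100_000)
idx = np.random.default_rng(1).choice(len(data), size=50, replace=False)
weights = np.full(50, len(data) / 50)  # placeholder uniform weights; real
                                       # methods optimize both idx and weights
print(full_log_likelihood(0.1, data),
      coreset_log_likelihood(0.1, data[idx], weights))
```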
no code implementations • 24 Oct 2024 • Tiange Liu, Nikola Surjanovic, Miguel Biron-Lattes, Alexandre Bouchard-Côté, Trevor Campbell
Many common Markov chain Monte Carlo (MCMC) kernels can be formulated using a deterministic involutive proposal with a step size parameter.
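As a toy illustration of the involutive formulation (a hand-rolled sketch, not the paper's kernels): augment the state with a momentum v, propose through the involution f(x, v) = (x + eps*v, -v), which satisfies f(f(x, v)) = (x, v) and has unit Jacobian, and accept with the standard Metropolis–Hastings ratio on the joint target.

```python
import numpy as np

def involutive_mh_step(x, log_pi, eps, rng):
    # Refresh an auxiliary momentum v ~ N(0, 1), apply the involutive
    # proposal f(x, v) = (x + eps*v, -v) (|det Jacobian| = 1), and accept
    # with the MH ratio on the joint density pi(x) * N(v; 0, 1).
    v = rng.standard_normal()
    x_new, v_new = x + eps * v, -v
    log_alpha = (log_pi(x_new) - 0.5 * v_new**2) - (log_pi(x) - 0.5 * v**2)
    return x_new if np.log(rng.uniform()) < log_alpha else x

rng = np.random.default_rng(0)
log_pi = lambda x: -0.5 * x**2      # standard normal target
x, samples = 3.0, []
for _ in range(5000):
    x = involutive_mh_step(x, log_pi, eps=0.8, rng=rng)
    samples.append(x)
print(np.mean(samples), np.var(samples))  # should be near 0 and 1
```

Here eps plays exactly the role of the step size parameter mentioned in the abstract; larger eps yields bolder proposals with lower acceptance rates.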
no code implementations • 20 May 2024 • Trevor Campbell
The lower bounds require only mild model assumptions typical of Bayesian asymptotic analyses, while the upper bounds require the log-likelihood functions to satisfy a generalized subexponentiality criterion that is weaker than conditions used in earlier work.
no code implementations • 14 Feb 2024 • Alexandre Bouchard-Côté, Trevor Campbell, Geoff Pleiss, Nikola Surjanovic
This paper is intended to appear as a chapter for the Handbook of Markov Chain Monte Carlo.
1 code implementation • 25 Oct 2023 • Naitong Chen, Trevor Campbell
A Bayesian coreset is a small, weighted subset of data that replaces the full dataset during inference in order to reduce computational cost.
1 code implementation • 29 Aug 2023 • Gian Carlo Diluvi, Benjamin Bloem-Reddy, Trevor Campbell
First, we develop a measure-preserving and discrete (MAD) invertible map that leaves the discrete target invariant, and then create a mixed variational flow (MAD Mix) based on that map.
no code implementations • 21 Apr 2023 • Steven Winter, Trevor Campbell, Lizhen Lin, Sanvesh Srivastava, David B. Dunson
Bayesian models are a powerful tool for studying complex data, allowing the analyst to encode rich hierarchical dependencies and leverage prior information.
no code implementations • 17 Jun 2022 • Berend Zwartsenberg, Adam Ścibior, Matthew Niedoba, Vasileios Lioutas, Yunpeng Liu, Justice Sefas, Setareh Dabiri, Jonathan Wilder Lavington, Trevor Campbell, Frank Wood
We present a novel, conditional generative probabilistic model of set-valued data with a tractable log density.
2 code implementations • 16 May 2022 • Zuheng Xu, Naitong Chen, Trevor Campbell
This work presents mixed variational flows (MixFlows), a new variational family that consists of a mixture of repeated applications of a map to an initial reference distribution.
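A hedged sketch of the mixture construction (the shift map below is a toy stand-in; the paper builds T from target-preserving dynamics): sampling picks a uniformly random number of map applications, and the density averages the pulled-back reference density with change-of-variables corrections.

```python
import numpy as np

def mixflow_sample(sample_ref, T, N, rng):
    # Draw from the mixture (1/N) * sum_{n=0}^{N-1} T^n_# q0: choose the
    # number of map applications uniformly, then push a reference draw through T.
    x = sample_ref(rng)
    for _ in range(rng.integers(N)):
        x = T(x)
    return x

def mixflow_log_density(x, log_q0, T_inv, log_abs_det_J_inv, N):
    # Exact mixture density: average the reference density pulled back through
    # 0, 1, ..., N-1 inverse applications of T, accumulating Jacobian terms.
    y, log_jac, terms = x, 0.0, []
    for _ in range(N):
        terms.append(log_q0(y) + log_jac)
        log_jac += log_abs_det_J_inv(y)
        y = T_inv(y)
    t = np.array(terms)
    return t.max() + np.log(np.exp(t - t.max()).sum()) - np.log(N)

rng = np.random.default_rng(0)
T, T_inv = (lambda x: x + 1.0), (lambda x: x - 1.0)  # toy volume-preserving map
log_q0 = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
draws = [mixflow_sample(lambda r: r.standard_normal(), T, 5, rng) for _ in range(4)]
print(draws, mixflow_log_density(0.5, log_q0, T_inv, lambda x: 0.0, 5))
```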
1 code implementation • 18 Mar 2022 • Cian Naik, Judith Rousseau, Trevor Campbell
Bayesian coresets approximate a posterior distribution by building a small weighted subset of the data points.
1 code implementation • 11 Mar 2022 • Naitong Chen, Zuheng Xu, Trevor Campbell
A Bayesian coreset is a small, weighted subset of data that replaces the full dataset during Bayesian inference, with the goal of reducing computational cost.
1 code implementation • 13 Apr 2021 • Zuheng Xu, Trevor Campbell
Gaussian variational inference and the Laplace approximation are popular alternatives to Markov chain Monte Carlo that formulate Bayesian posterior inference as an optimization problem, enabling the use of simple and scalable stochastic optimization algorithms.
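For intuition, a minimal sketch of Gaussian variational inference as stochastic optimization (one-dimensional, one-sample reparameterization gradients; step sizes and iteration counts are illustrative):

```python
import numpy as np

def gaussian_vi(grad_log_post, n_iters=5000, lr=1e-3, rng=None):
    # Fit q = N(m, s^2) by stochastic gradient ascent on the ELBO,
    #   ELBO(m, s) = E_{eps ~ N(0,1)}[ log_post(m + s*eps) ] + log s + const,
    # using one reparameterized sample per iteration.
    rng = rng or np.random.default_rng(0)
    m, log_s = 0.0, 0.0
    for _ in range(n_iters):
        s = np.exp(log_s)
        eps = rng.standard_normal()
        g = grad_log_post(m + s * eps)     # d/dx log posterior at the sample
        m += lr * g                        # unbiased estimate of dELBO/dm
        log_s += lr * (g * eps * s + 1.0)  # unbiased estimate of dELBO/dlog_s
    return m, np.exp(log_s)

# Toy posterior N(2, 0.5^2); VI should recover mean ~2 and sd ~0.5.
m, s = gaussian_vi(lambda x: -(x - 2.0) / 0.25)
print(m, s)
```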
1 code implementation • 15 Feb 2021 • Saifuddin Syed, Vittorio Romaniello, Trevor Campbell, Alexandre Bouchard-Côté
Parallel tempering (PT) is a class of Markov chain Monte Carlo algorithms that constructs a path of distributions annealing between a tractable reference and an intractable target, and then interchanges states along the path to improve mixing in the target.
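A minimal sketch of the communication (swap) step along a standard linear annealing path; the paper's contribution concerns choosing the path itself, which is not shown here.

```python
import numpy as np

def pt_swap_sweep(states, betas, log_target, log_ref, rng):
    # One swap sweep of parallel tempering: chain k targets the annealed
    # density pi_k(x) prop. to ref(x)^(1-beta_k) * target(x)^beta_k, and
    # adjacent chains propose to exchange states via an MH accept step.
    def log_anneal(x, beta):
        return (1 - beta) * log_ref(x) + beta * log_target(x)
    for k in range(len(states) - 1):
        x, y = states[k], states[k + 1]
        log_alpha = (log_anneal(y, betas[k]) + log_anneal(x, betas[k + 1])
                     - log_anneal(x, betas[k]) - log_anneal(y, betas[k + 1]))
        if np.log(rng.uniform()) < log_alpha:
            states[k], states[k + 1] = y, x
    return states

rng = np.random.default_rng(0)
betas = np.linspace(0.0, 1.0, 5)   # beta=0: reference; beta=1: target
states = list(rng.standard_normal(5))
states = pt_swap_sweep(states, betas,
                       lambda x: -0.5 * (x - 3) ** 2,  # intractable "target"
                       lambda x: -0.5 * x ** 2,        # tractable reference
                       rng)
print(states)
```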
1 code implementation • NeurIPS 2020 • Dionysis Manousakas, Zuheng Xu, Cecilia Mascolo, Trevor Campbell
Standard Bayesian inference algorithms are prohibitively expensive in the regime of modern large-scale data.
1 code implementation • 27 Nov 2020 • Sina Amini Niaki, Ehsan Haghighat, Trevor Campbell, Anoush Poursartip, Reza Vaziri
We present a Physics-Informed Neural Network (PINN) to simulate the thermochemical evolution of a composite material on a tool undergoing cure in an autoclave.
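To illustrate the physics-informed residual idea in isolation (a toy 1D heat equation in PyTorch, not the paper's coupled heat-transfer and cure-kinetics model; boundary terms are omitted): the network takes (x, t) as input, and automatic differentiation supplies the PDE residual that is penalized alongside data-fit terms.

```python
import torch

# Toy PINN for u_t = alpha * u_xx with initial condition u(x, 0) = sin(pi*x).
net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
alpha = 0.1

for step in range(1000):
    xt = torch.rand(256, 2, requires_grad=True)  # collocation points (x, t)
    u = net(xt)
    grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = grads[:, 0], grads[:, 1]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0]
    pde_residual = (u_t - alpha * u_xx).pow(2).mean()
    x0 = torch.rand(64, 1)
    ic = (net(torch.cat([x0, torch.zeros_like(x0)], 1))
          - torch.sin(torch.pi * x0)).pow(2).mean()
    loss = pde_residual + ic
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 200 == 0:
        print(step, float(loss))
```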
no code implementations • NeurIPS Workshop ICBINB 2020 • Diana Cai, Trevor Campbell, Tamara Broderick
Increasingly, though, data science papers suggest potential alternatives beyond vanilla finite mixture models (FMMs), such as power posteriors, coarsening, and related methods.
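For context, one of the alternatives named above has a one-line definition: the power posterior tempers the likelihood with an exponent alpha in (0, 1), recovering the standard posterior at alpha = 1.

```latex
\pi_\alpha(\theta \mid x_{1:n}) \;\propto\; \pi(\theta)\, \prod_{i=1}^{n} p(x_i \mid \theta)^{\alpha}
```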
no code implementations • 8 Jul 2020 • Diana Cai, Trevor Campbell, Tamara Broderick
In this paper, we add rigor to data-analysis folk wisdom by proving that under even the slightest model misspecification, the FMM component-count posterior diverges: the posterior probability of any particular finite number of components converges to 0 in the limit of infinite data.
no code implementations • 24 Jun 2020 • Peiyuan Zhu, Alexandre Bouchard-Côté, Trevor Campbell
Completely random measures provide a principled approach to creating flexible unsupervised models, where the number of latent features is infinite and the number of features that influence the data grows with the size of the data set.
1 code implementation • 9 Oct 2019 • Jonathan H. Huggins, Mikołaj Kasprzak, Trevor Campbell, Tamara Broderick
Finally, we demonstrate the utility of our proposed workflow and error bounds on a robust regression problem and on a real-data example with a widely used multilevel hierarchical model.
1 code implementation • NeurIPS 2019 • Trevor Campbell, Boyan Beronov
But the automation of past coreset methods is limited because they depend on the availability of a reasonable coarse posterior approximation, which is difficult to specify in practice.
2 code implementations • NeurIPS 2019 • Trevor Campbell, Xinglong Li
We show that for any target density and any mixture component family, the output of UBVI converges to the best possible approximation in the mixture family, even when the mixture family is misspecified.
no code implementations • 28 Nov 2018 • Miriam Shiffman, William T. Stephenson, Geoffrey Schiebinger, Jonathan Huggins, Trevor Campbell, Aviv Regev, Tamara Broderick
Specifically, we extend the framework of the classical Dirichlet diffusion tree to simultaneously infer branch topology and latent cell states along continuous trajectories over the full tree.
no code implementations • 9 Oct 2018 • Raj Agrawal, Trevor Campbell, Jonathan H. Huggins, Tamara Broderick
Random feature maps (RFMs) and the Nyström method both consider low-rank approximations to the kernel matrix as a potential solution.
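A minimal random Fourier feature sketch of the RFM idea (RBF kernel): the feature map z satisfies z(x) @ z(y) ~ k(x, y), with error shrinking as the number of features grows.

```python
import numpy as np

def random_fourier_features(X, n_features, lengthscale, rng):
    # Random Fourier feature map z(x) whose inner products approximate the
    # RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * lengthscale^2)).
    d = X.shape[1]
    W = rng.standard_normal((d, n_features)) / lengthscale
    b = rng.uniform(0, 2 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 3))
Z = random_fourier_features(X, n_features=2000, lengthscale=1.0, rng=rng)
K_approx = Z @ Z.T                 # rank-2000 kernel approximation
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1))
print(np.max(np.abs(K_approx - K_exact)))  # error decays like 1/sqrt(n_features)
```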
no code implementations • 25 Sep 2018 • Jonathan H. Huggins, Trevor Campbell, Mikołaj Kasprzak, Tamara Broderick
Bayesian inference typically requires the computation of an approximation to the posterior distribution.
no code implementations • 26 Jun 2018 • Jonathan H. Huggins, Trevor Campbell, Mikołaj Kasprzak, Tamara Broderick
We develop an approach to scalable approximate Gaussian process (GP) regression with finite-data guarantees on the accuracy of pointwise posterior mean and variance estimates.
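For reference, the exact pointwise posterior mean and variance that such approximations target (the standard O(n^3) computation; the paper's scalable estimator is not shown here):

```python
import numpy as np

def gp_posterior(X, y, X_star, kernel, noise_var):
    # Exact GP regression posterior: mean k_*^T (K + s^2 I)^{-1} y and
    # variance k(x_*, x_*) - k_*^T (K + s^2 I)^{-1} k_* at each test point.
    K = kernel(X, X) + noise_var * np.eye(len(X))
    K_star = kernel(X, X_star)
    mean = K_star.T @ np.linalg.solve(K, y)
    var = kernel(X_star, X_star).diagonal() - np.einsum(
        'ij,ij->j', K_star, np.linalg.solve(K, K_star))
    return mean, var

rbf = lambda A, B: np.exp(-0.5 * np.sum((A[:, None] - B[None]) ** 2, axis=-1))
X = np.linspace(0, 1, 20)[:, None]
y = np.sin(6 * X[:, 0])
mu, var = gp_posterior(X, y, np.array([[0.5]]), rbf, noise_var=0.01)
print(mu, var)
```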
1 code implementation • ICML 2018 • Trevor Campbell, Tamara Broderick
Coherent uncertainty quantification is a key strength of Bayesian methods.
2 code implementations • 13 Oct 2017 • Trevor Campbell, Tamara Broderick
We begin with an intuitive reformulation of Bayesian coreset construction as sparse vector sum approximation, and demonstrate that the automation and performance shortcomings of past methods arise from the use of the supremum norm.
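A hedged Frank–Wolfe-style sketch of the reformulated problem (rows of L stand in for vector representations of per-datapoint log-likelihoods; this illustrates sparse vector sum approximation under those assumptions, not necessarily the paper's exact algorithm):

```python
import numpy as np

def fw_coreset(L, n_iters):
    # Approximate the sum s = sum_i L[i] with a sparse nonnegative weighting w:
    # at each step, pick the row most aligned with the residual, then take an
    # exact line-search step toward its scaled vertex.
    N = L.shape[0]
    s = L.sum(axis=0)
    sigmas = np.linalg.norm(L, axis=1)
    sigma = sigmas.sum()
    w = np.zeros(N)
    for _ in range(n_iters):
        r = s - L.T @ w                    # current residual
        f = np.argmax(L @ r / sigmas)      # most aligned direction
        d = (sigma / sigmas[f]) * L[f] - L.T @ w
        gamma = np.clip(d @ r / (d @ d), 0.0, 1.0)
        w = (1 - gamma) * w
        w[f] += gamma * sigma / sigmas[f]
    return w

rng = np.random.default_rng(0)
L = rng.standard_normal((1000, 20))
w = fw_coreset(L, n_iters=50)
print(np.sum(w > 0),                                   # coreset size <= 50
      np.linalg.norm(L.sum(0) - L.T @ w) / np.linalg.norm(L.sum(0)))
```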
no code implementations • 26 Jul 2017 • Trevor Campbell, Brian Kulis, Jonathan How
Bayesian nonparametrics are a class of probabilistic models in which the model size is inferred from data.
no code implementations • 16 Dec 2016 • Diana Cai, Trevor Campbell, Tamara Broderick
Many popular network models rely on the assumption of (vertex) exchangeability, in which the distribution of the graph is invariant to relabelings of the vertices.
no code implementations • CVPR 2015 • Julian Straub, Trevor Campbell, Jonathan P. How, John W. Fisher III
Based on the small-variance limit of Bayesian nonparametric von Mises-Fisher (vMF) mixture distributions, we propose two new flexible and efficient k-means-like clustering algorithms for directional data such as surface normals.
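A minimal spherical k-means sketch for unit-norm directional data (fixed k; the paper's algorithms additionally infer the number of clusters via the small-variance limit):

```python
import numpy as np

def spherical_kmeans(X, k, n_iters=50, rng=None):
    # k-means-like clustering on the unit sphere: assign each point to the
    # centroid with highest cosine similarity, then set each centroid to the
    # re-normalized mean of its assigned points.
    rng = rng or np.random.default_rng(0)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iters):
        labels = np.argmax(X @ C.T, axis=1)       # cosine-similarity assignment
        for j in range(k):
            m = X[labels == j].sum(axis=0)
            if np.linalg.norm(m) > 0:
                C[j] = m / np.linalg.norm(m)      # project mean back to sphere
    return labels, C

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)     # surface-normal-like data
labels, C = spherical_kmeans(X, k=4)
print(np.bincount(labels))
```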
2 code implementations • NeurIPS 2016 • Jonathan H. Huggins, Trevor Campbell, Tamara Broderick
We demonstrate the efficacy of our approach on a number of synthetic and real-world datasets, and find that, in practice, the size of the coreset is independent of the original dataset size.
no code implementations • CVPR 2017 • Julian Straub, Trevor Campbell, Jonathan P. How, John W. Fisher III
Point cloud alignment is a common problem in computer vision and robotics, with applications ranging from 3D object recognition to reconstruction.
no code implementations • NeurIPS 2015 • Trevor Campbell, Julian Straub, John W. Fisher III, Jonathan P. How
This paper presents a methodology for creating streaming, distributed inference algorithms for Bayesian nonparametric (BNP) models.
no code implementations • 28 Mar 2014 • Trevor Campbell, Jonathan P. How
The method first employs variational inference on each individual learning agent to generate a local approximate posterior; the agents then transmit their local posteriors to other agents in the network; and finally, each agent combines its set of received local posteriors.
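A hedged sketch of one standard combination rule for Gaussian local posteriors (multiply the A local posteriors and divide out the A-1 extra copies of the shared prior, in natural parameters; whether this matches the paper's exact update step is an assumption):

```python
import numpy as np

def combine_gaussian_posteriors(local_means, local_vars, prior_mean, prior_var):
    # Combine A local Gaussian posteriors that each used the same prior:
    # work with natural parameters (precision, precision * mean), sum the
    # local contributions, and subtract the A-1 duplicated prior terms.
    A = len(local_means)
    prec = sum(1 / v for v in local_vars) - (A - 1) / prior_var
    pm = (sum(m / v for m, v in zip(local_means, local_vars))
          - (A - 1) * prior_mean / prior_var)
    return pm / prec, 1 / prec   # combined mean, combined variance

# Two agents, each holding half the data from the same Gaussian model.
mean, var = combine_gaussian_posteriors([1.9, 2.1], [0.02, 0.02], 0.0, 100.0)
print(mean, var)   # roughly the mean of the locals, with about half the variance
```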
1 code implementation • NeurIPS 2013 • Trevor Campbell, Miao Liu, Brian Kulis, Jonathan P. How, Lawrence Carin
This paper presents a novel algorithm, based upon the dependent Dirichlet process mixture model (DDPMM), for clustering batch-sequential data containing an unknown number of evolving clusters.