Search Results for author: Trevor Campbell

Found 32 papers, 16 papers with code

MCMC-driven learning

no code implementations · 14 Feb 2024 · Alexandre Bouchard-Côté, Trevor Campbell, Geoff Pleiss, Nikola Surjanovic

This paper is intended to appear as a chapter for the Handbook of Markov Chain Monte Carlo.

Variational Inference

Coreset Markov Chain Monte Carlo

1 code implementation · 25 Oct 2023 · Naitong Chen, Trevor Campbell

A Bayesian coreset is a small, weighted subset of data that replaces the full dataset during inference in order to reduce computational cost.
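
To make the coreset idea concrete, here is a minimal Python sketch (illustrative only, not the paper's Coreset MCMC algorithm; `loglik`, `data`, and `weights` are placeholder names):

```python
import numpy as np

def full_log_likelihood(theta, data, loglik):
    # Exact but expensive: sums over all N data points.
    return sum(loglik(theta, x) for x in data)

def coreset_log_likelihood(theta, coreset, weights, loglik):
    # Cheap surrogate: a weighted sum over M << N points, with
    # weights chosen so the two sums nearly agree across theta.
    return sum(w * loglik(theta, x) for x, w in zip(coreset, weights))
```

Any MCMC kernel can then target the surrogate posterior built from the weighted sum; the hard part, which the paper addresses, is choosing the weights well.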

Mixed Variational Flows for Discrete Variables

1 code implementation · 29 Aug 2023 · Gian Carlo Diluvi, Benjamin Bloem-Reddy, Trevor Campbell

First, we develop a measure-preserving and discrete (MAD) invertible map that leaves the discrete target invariant, and then create a mixed variational flow (MAD Mix) based on that map.

Machine Learning and the Future of Bayesian Computation

no code implementations · 21 Apr 2023 · Steven Winter, Trevor Campbell, Lizhen Lin, Sanvesh Srivastava, David B. Dunson

Bayesian models are a powerful tool for studying complex data, allowing the analyst to encode rich hierarchical dependencies and leverage prior information.

Bayesian Inference · Variational Inference

MixFlows: principled variational inference via mixed flows

2 code implementations · 16 May 2022 · Zuheng Xu, Naitong Chen, Trevor Campbell

This work presents mixed variational flows (MixFlows), a new variational family that consists of a mixture of repeated applications of a map to an initial reference distribution.

Variational Inference
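
As a rough illustration of the construction (my simplification, with placeholder `ref_sample` and `flow_map` arguments): sampling from an equal-weight mixture of K repeated applications of a map reduces to applying the map a uniformly random number of times to a reference draw.

```python
import numpy as np

rng = np.random.default_rng(0)

def mixflow_sample(ref_sample, flow_map, K):
    # Draw from the reference distribution, then push the draw
    # through the map k times with k uniform on {0, ..., K-1};
    # the result follows the equal-weight mixture of pushforwards.
    x = ref_sample(rng)
    for _ in range(int(rng.integers(K))):
        x = flow_map(x)
    return x
```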

Fast Bayesian Coresets via Subsampling and Quasi-Newton Refinement

1 code implementation · 18 Mar 2022 · Cian Naik, Judith Rousseau, Trevor Campbell

Bayesian coresets approximate a posterior distribution by building a small weighted subset of the data points.

Bayesian inference via sparse Hamiltonian flows

1 code implementation · 11 Mar 2022 · Naitong Chen, Zuheng Xu, Trevor Campbell

A Bayesian coreset is a small, weighted subset of data that replaces the full dataset during Bayesian inference, with the goal of reducing computational cost.

Bayesian Inference

The computational asymptotics of Gaussian variational inference and the Laplace approximation

1 code implementation · 13 Apr 2021 · Zuheng Xu, Trevor Campbell

Gaussian variational inference and the Laplace approximation are popular alternatives to Markov chain Monte Carlo that formulate Bayesian posterior inference as an optimization problem, enabling the use of simple and scalable stochastic optimization algorithms.

Bayesian Inference · Stochastic Optimization +1
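
For instance, the Laplace approximation reduces to one optimization plus a curvature estimate; a minimal SciPy sketch (assuming a user-supplied `neg_log_post`, and using BFGS's running inverse-Hessian estimate in place of an exact Hessian):

```python
import numpy as np
from scipy.optimize import minimize

def laplace_approximation(neg_log_post, theta0):
    # Find the posterior mode by minimizing the negative log
    # posterior; approximate the posterior by a Gaussian centered
    # at the mode with covariance given by the inverse Hessian.
    res = minimize(neg_log_post, np.asarray(theta0), method="BFGS")
    return res.x, res.hess_inv  # mean, covariance
```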

Parallel Tempering on Optimized Paths

1 code implementation · 15 Feb 2021 · Saifuddin Syed, Vittorio Romaniello, Trevor Campbell, Alexandre Bouchard-Côté

Parallel tempering (PT) is a class of Markov chain Monte Carlo algorithms that constructs a path of distributions annealing between a tractable reference and an intractable target, and then interchanges states along the path to improve mixing in the target.

Computation · 65C05
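
A minimal sketch of the swap move on the standard linear path between reference and target (the paper's contribution is optimizing over more general annealing paths; `log_ref` and `log_target` are placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)

def pt_swap_sweep(states, betas, log_ref, log_target):
    # Annealed log density at inverse temperature beta.
    def log_pi(beta, x):
        return (1.0 - beta) * log_ref(x) + beta * log_target(x)

    # Propose swapping the states of adjacent chains and accept
    # each swap with the usual Metropolis ratio for the pair.
    for i in range(len(betas) - 1):
        b0, b1 = betas[i], betas[i + 1]
        x0, x1 = states[i], states[i + 1]
        log_acc = (log_pi(b0, x1) + log_pi(b1, x0)
                   - log_pi(b0, x0) - log_pi(b1, x1))
        if np.log(rng.uniform()) < log_acc:
            states[i], states[i + 1] = x1, x0
    return states
```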

Bayesian Pseudocoresets

1 code implementation · NeurIPS 2020 · Dionysis Manousakas, Zuheng Xu, Cecilia Mascolo, Trevor Campbell

Standard Bayesian inference algorithms are prohibitively expensive in the regime of modern large-scale data.

Bayesian Inference

Physics-Informed Neural Network for Modelling the Thermochemical Curing Process of Composite-Tool Systems During Manufacture

1 code implementation · 27 Nov 2020 · Sina Amini Niaki, Ehsan Haghighat, Trevor Campbell, Anoush Poursartip, Reza Vaziri

We present a Physics-Informed Neural Network (PINN) to simulate the thermochemical evolution of a composite material on a tool undergoing cure in an autoclave.

Transfer Learning
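
The paper's network couples cure kinetics with heat transfer in the composite-tool system; purely as an illustration of the PINN mechanism, here is a minimal JAX sketch of a physics residual for the 1D heat equation u_t = alpha * u_xx (the architecture and PDE here are mine, not the paper's):

```python
import jax
import jax.numpy as jnp

def mlp(params, xt):
    # Tiny fully connected network mapping (x, t) to a scalar u.
    h = xt
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = params[-1]
    return (W @ h + b)[0]

def pde_residual(params, x, t, alpha):
    # Residual of u_t = alpha * u_xx; a PINN loss penalizes this
    # residual at collocation points alongside the data misfit,
    # so the network is forced to respect the physics.
    u = lambda x_, t_: mlp(params, jnp.array([x_, t_]))
    u_t = jax.grad(u, argnums=1)(x, t)
    u_xx = jax.grad(jax.grad(u, argnums=0), argnums=0)(x, t)
    return u_t - alpha * u_xx

key1, key2 = jax.random.split(jax.random.PRNGKey(0))
params = [(0.1 * jax.random.normal(key1, (16, 2)), jnp.zeros(16)),
          (0.1 * jax.random.normal(key2, (1, 16)), jnp.zeros(1))]
print(pde_residual(params, 0.3, 0.1, alpha=1e-2))
```

The training loss would add the mean squared residual over collocation points to the data and boundary terms.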

Power posteriors do not reliably learn the number of components in a finite mixture

no code implementations · NeurIPS Workshop ICBINB 2020 · Diana Cai, Trevor Campbell, Tamara Broderick

Increasingly, though, data science papers suggest potential alternatives beyond vanilla finite mixture models (FMMs), such as power posteriors, coarsening, and related methods.

Finite mixture models do not reliably learn the number of components

no code implementations · 8 Jul 2020 · Diana Cai, Trevor Campbell, Tamara Broderick

In this paper, we add rigor to data-analysis folk wisdom by proving that under even the slightest model misspecification, the finite mixture model (FMM) component-count posterior diverges: the posterior probability of any particular finite number of components converges to 0 in the limit of infinite data.
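
In symbols (my notation, with K the number of components and x_{1:n} the data), the divergence result says that under misspecification

$$ \lim_{n \to \infty} \Pi\!\left(K = k \mid x_{1:n}\right) = 0 \quad \text{for every fixed finite } k, $$

so no finite component count retains posterior mass as the data grow.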

Slice Sampling for General Completely Random Measures

no code implementations · 24 Jun 2020 · Peiyuan Zhu, Alexandre Bouchard-Côté, Trevor Campbell

Completely random measures provide a principled approach to creating flexible unsupervised models, where the number of latent features is infinite and the number of features that influence the data grows with the size of the data set.
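
For background (standard notation, ignoring the deterministic and fixed-atom parts of the measure), a completely random measure can be written as

$$ \mu = \sum_{k=1}^{\infty} w_k \, \delta_{\theta_k}, $$

where the atoms (w_k, θ_k) form a Poisson process with rate measure ν(dw, dθ); slice sampling handles the infinite sum by introducing an auxiliary threshold so that only finitely many atoms need to be instantiated at each iteration.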

Validated Variational Inference via Practical Posterior Error Bounds

1 code implementation · 9 Oct 2019 · Jonathan H. Huggins, Mikołaj Kasprzak, Trevor Campbell, Tamara Broderick

Finally, we demonstrate the utility of our proposed workflow and error bounds on a robust regression problem and on a real-data example with a widely used multilevel hierarchical model.

Bayesian Inference · Variational Inference

Sparse Variational Inference: Bayesian Coresets from Scratch

1 code implementation · NeurIPS 2019 · Trevor Campbell, Boyan Beronov

But the automation of past coreset methods is limited because they depend on the availability of a reasonable coarse posterior approximation, which is difficult to specify in practice.

Variational Inference

Universal Boosting Variational Inference

2 code implementations · NeurIPS 2019 · Trevor Campbell, Xinglong Li

We show that for any target density and any mixture component family, the output of UBVI converges to the best possible approximation in the mixture family, even when the mixture family is misspecified.

Stochastic Optimization · Variational Inference
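
Boosting VI schemes of this kind grow the approximation greedily; in generic form (a textbook boosting update, not UBVI's specific geometry),

$$ q_t = (1 - \gamma_t)\, q_{t-1} + \gamma_t\, h_t, $$

where each step selects a new component h_t from the family and a weight γ_t to best improve the fit to the target; the quoted result says the limit of this procedure attains the best approximation achievable by the mixture family.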

Reconstructing probabilistic trees of cellular differentiation from single-cell RNA-seq data

no code implementations · 28 Nov 2018 · Miriam Shiffman, William T. Stephenson, Geoffrey Schiebinger, Jonathan Huggins, Trevor Campbell, Aviv Regev, Tamara Broderick

Specifically, we extend the framework of the classical Dirichlet diffusion tree to simultaneously infer branch topology and latent cell states along continuous trajectories over the full tree.

Data-dependent compression of random features for large-scale kernel approximation

no code implementations · 9 Oct 2018 · Raj Agrawal, Trevor Campbell, Jonathan H. Huggins, Tamara Broderick

Random feature maps (RFMs) and the Nyström method both consider low-rank approximations to the kernel matrix as a potential solution.

Feature Selection
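
The paper compresses such features in a data-dependent way; for background, a minimal sketch of plain (data-independent) random Fourier features for the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2):

```python
import numpy as np

def rff_features(X, D, gamma, rng):
    # Sample frequencies from the kernel's spectral density and
    # phases uniformly; then z(x) @ z(y) is an unbiased Monte
    # Carlo estimate of the RBF kernel k(x, y).
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = rff_features(X, D=500, gamma=0.5, rng=rng)
K_approx = Z @ Z.T  # low-rank approximation to the kernel matrix
```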

Scalable Gaussian Process Inference with Finite-data Mean and Variance Guarantees

no code implementations · 26 Jun 2018 · Jonathan H. Huggins, Trevor Campbell, Mikołaj Kasprzak, Tamara Broderick

We develop an approach to scalable approximate GP regression with finite-data guarantees on the accuracy of pointwise posterior mean and variance estimates.

Gaussian Processes · Regression +1
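
For reference, the exact quantities that such scalable methods approximate are the standard GP regression posterior mean and variance (textbook formulas, not specific to this paper):

$$ \mu(x_*) = k_*^\top (K + \sigma^2 I)^{-1} y, \qquad \sigma^2(x_*) = k(x_*, x_*) - k_*^\top (K + \sigma^2 I)^{-1} k_*, $$

where K is the kernel matrix on the n training inputs and k_* collects kernel evaluations between x_* and the training inputs; the O(n^3) linear solve is what motivates approximation in the first place.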

Automated Scalable Bayesian Inference via Hilbert Coresets

2 code implementations · 13 Oct 2017 · Trevor Campbell, Tamara Broderick

We begin with an intuitive reformulation of Bayesian coreset construction as sparse vector sum approximation, and demonstrate that its automation and performance-based shortcomings arise from the use of the supremum norm.

Bayesian Inference
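
In that reformulation (notation mine), with L_n denoting the n-th log-likelihood function viewed as a vector in a Hilbert space, coreset construction becomes the sparse approximation problem

$$ \min_{w \geq 0} \Big\| \sum_{n=1}^{N} \mathcal{L}_n - \sum_{n=1}^{N} w_n \mathcal{L}_n \Big\|^2 \quad \text{subject to} \quad \|w\|_0 \leq M, $$

and the choice of Hilbert norm, rather than the supremum norm, is what enables automation and better performance.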

Dynamic Clustering Algorithms via Small-Variance Analysis of Markov Chain Mixture Models

no code implementations · 26 Jul 2017 · Trevor Campbell, Brian Kulis, Jonathan How

Bayesian nonparametrics are a class of probabilistic models in which the model size is inferred from data.

Clustering

Edge-exchangeable graphs and sparsity (NeurIPS 2016)

no code implementations · 16 Dec 2016 · Diana Cai, Trevor Campbell, Tamara Broderick

Many popular network models rely on the assumption of (vertex) exchangeability, in which the distribution of the graph is invariant to relabelings of the vertices.
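
Formally (standard notation), vertex exchangeability requires the adjacency array to satisfy

$$ (A_{ij}) \overset{d}{=} (A_{\sigma(i)\sigma(j)}) \quad \text{for every vertex permutation } \sigma, $$

and by the Aldous-Hoover theorem such graphs are almost surely dense or empty, which is at odds with the sparsity of real networks; edge-exchangeable models instead make the sequence of edges exchangeable.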

Small-Variance Nonparametric Clustering on the Hypersphere

no code implementations · CVPR 2015 · Julian Straub, Trevor Campbell, Jonathan P. How, John W. Fisher III

Based on the small-variance limit of Bayesian nonparametric von-Mises-Fisher (vMF) mixture distributions, we propose two new flexible and efficient k-means-like clustering algorithms for directional data such as surface normals.

Clustering · Nonparametric Clustering +1
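
The small-variance limit turns vMF mixture inference into hard, k-means-like updates; a plain spherical k-means sketch conveys the flavor (this is the classic fixed-k algorithm, not the paper's nonparametric variants):

```python
import numpy as np

def spherical_kmeans(X, k, iters, rng):
    # X: rows are unit vectors (e.g., surface normals). Assign each
    # point to the center of highest cosine similarity, then
    # re-normalize cluster means back onto the sphere.
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmax(X @ centers.T, axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members) > 0:
                m = members.sum(axis=0)
                centers[j] = m / np.linalg.norm(m)
    return np.argmax(X @ centers.T, axis=1), centers
```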

Coresets for Scalable Bayesian Logistic Regression

2 code implementations · NeurIPS 2016 · Jonathan H. Huggins, Trevor Campbell, Tamara Broderick

We demonstrate the efficacy of our approach on a number of synthetic and real-world datasets, and find that, in practice, the size of the coreset is independent of the original dataset size.

Bayesian Inference · Regression +1

Efficient Global Point Cloud Alignment using Bayesian Nonparametric Mixtures

no code implementations · CVPR 2017 · Julian Straub, Trevor Campbell, Jonathan P. How, John W. Fisher III

Point cloud alignment is a common problem in computer vision and robotics, with applications ranging from 3D object recognition to reconstruction.

3D Object Recognition

Approximate Decentralized Bayesian Inference

no code implementations · 28 Mar 2014 · Trevor Campbell, Jonathan P. How

The method first employs variational inference on each individual learning agent to generate a local approximate posterior; the agents then transmit their local posteriors to the other agents in the network; finally, each agent combines its set of received local posteriors.

Bayesian Inference · Variational Inference
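
Under Gaussian approximations with a shared prior, one standard combination rule multiplies the m local posteriors and divides out the m − 1 extra copies of the prior (a sketch of that generic rule, not necessarily the paper's exact combination operator):

```python
import numpy as np

def combine_gaussians(means, covs, prior_mean, prior_cov):
    # Work in natural parameters: precisions add, and the shared
    # prior must be counted only once across all m agents.
    m = len(means)
    lam0 = np.linalg.inv(prior_cov)
    lam = sum(np.linalg.inv(C) for C in covs) - (m - 1) * lam0
    eta = (sum(np.linalg.inv(C) @ mu for C, mu in zip(covs, means))
           - (m - 1) * (lam0 @ prior_mean))
    cov = np.linalg.inv(lam)
    return cov @ eta, cov  # combined mean, covariance
```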

Dynamic Clustering via Asymptotics of the Dependent Dirichlet Process Mixture

1 code implementation · NeurIPS 2013 · Trevor Campbell, Miao Liu, Brian Kulis, Jonathan P. How, Lawrence Carin

This paper presents a novel algorithm, based upon the dependent Dirichlet process mixture model (DDPMM), for clustering batch-sequential data containing an unknown number of evolving clusters.

Clustering
