Search Results for author: Diana Cai

Found 15 papers, 2 papers with code

Batch and match: black-box variational inference with a score-based divergence

no code implementations • 22 Feb 2024 • Diana Cai, Chirag Modi, Loucas Pillaud-Vivien, Charles C. Margossian, Robert M. Gower, David M. Blei, Lawrence K. Saul

We analyze the convergence of BaM when the target distribution is Gaussian, and we prove that in the limit of infinite batch size the variational parameter updates converge exponentially quickly to the target mean and covariance.

Variational Inference
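
As a rough illustration of the score-based idea (a toy sketch, not the authors' BaM algorithm), the snippet below fits the mean of a 1-D Gaussian by descending a Fisher-type divergence between scores; with equal variances the score gap is constant in x, the Monte Carlo gradient is exact, and the iterates contract geometrically toward the target mean, echoing the exponential convergence claimed in the abstract.

```python
# Toy sketch, not the authors' BaM algorithm: fit the mean of a 1-D
# Gaussian by descending a Fisher-type score-based divergence
#   D(mu) = E_q[(d/dx log p(x) - d/dx log q(x))^2].
import numpy as np

rng = np.random.default_rng(0)
mu_p, sigma = 2.0, 1.0      # target N(mu_p, sigma^2)
mu_q = -1.0                 # variational mean, to be learned

for step in range(200):
    x = rng.normal(mu_q, sigma, size=256)    # batch from q
    score_p = -(x - mu_p) / sigma**2         # d/dx log p
    score_q = -(x - mu_q) / sigma**2         # d/dx log q
    diff = score_p - score_q                 # constant in x here
    # With equal variances, D(mu_q) = (mu_p - mu_q)^2 / sigma^4, so the
    # exact gradient is -2 * mean(diff) / sigma^2; the iterates contract
    # geometrically (i.e., exponentially fast) toward mu_p.
    grad = -2.0 * diff.mean() / sigma**2
    mu_q -= 0.1 * grad

print(f"learned mean {mu_q:.3f} vs target {mu_p}")   # approx 2.000
```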

Kernel Density Bayesian Inverse Reinforcement Learning

1 code implementation • 13 Mar 2023 • Aishwarya Mandyam, Didong Li, Diana Cai, Andrew Jones, Barbara E. Engelhardt

Inverse reinforcement learning (IRL) is a powerful framework to infer an agent's reward function by observing its behavior, but IRL algorithms that learn point estimates of the reward function can be misleading because there may be several functions that describe an agent's behavior equally well.

BIRL, Density Estimation +2
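
A toy illustration of why a density beats a point estimate here (not the paper's KD-BIRL algorithm; the bimodal samples are hypothetical): a kernel density estimate over posterior samples of a 1-D reward parameter preserves multiple explanations that a point estimate would collapse.

```python
# Toy sketch, not the paper's KD-BIRL algorithm; the bimodal samples
# below are hypothetical stand-ins for posterior samples of a reward
# parameter when two reward settings explain the behavior equally well.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
samples = np.concatenate([rng.normal(-1.0, 0.2, 500),   # mode 1
                          rng.normal(+1.0, 0.2, 500)])  # mode 2
kde = gaussian_kde(samples)

for r in np.linspace(-2, 2, 9):
    print(f"reward={r:+.1f}  density={kde(r)[0]:.3f}")  # two clear modes
# The posterior mean (~0) lies between the modes and would describe the
# demonstrated behavior poorly -- exactly the failure of point estimates.
```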

Multi-fidelity Monte Carlo: a pseudo-marginal approach

no code implementations • 4 Oct 2022 • Diana Cai, Ryan P. Adams

A key challenge in applying MCMC to scientific domains is computation: the target density of interest is often a function of expensive computations, such as a high-fidelity physical simulation, an intractable integral, or a slowly-converging iterative algorithm.

Uncertainty Quantification
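
The pseudo-marginal idea in the title can be sketched in a few lines (a toy, not the paper's multi-fidelity estimator): Metropolis-Hastings remains exact when the expensive density is replaced by a non-negative unbiased estimate, provided the estimate for the current state is stored and reused.

```python
# Toy sketch of the pseudo-marginal idea, not the paper's multi-fidelity
# estimator: Metropolis-Hastings stays exact when the target density is
# replaced by a non-negative unbiased estimate, as long as the estimate
# for the current state is stored and reused between iterations.
import numpy as np

rng = np.random.default_rng(2)

def density_estimate(theta):
    # Stand-in for an expensive computation: an unbiased, noisy,
    # non-negative estimate of the unnormalized N(0, 1) density.
    return np.exp(-0.5 * theta**2) * rng.gamma(shape=8, scale=1 / 8)

theta, est = 0.0, density_estimate(0.0)
samples = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.5)       # symmetric random walk
    prop_est = density_estimate(prop)
    if rng.random() < prop_est / est:       # accept using the estimates
        theta, est = prop, prop_est         # reuse est until acceptance
    samples.append(theta)

print(np.mean(samples), np.std(samples))    # approx 0 and 1
```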

Slice Sampling Reparameterization Gradients

no code implementations • NeurIPS 2021 • David Zoltowski, Diana Cai, Ryan P. Adams

Slice sampling is a Markov chain Monte Carlo algorithm for simulating samples from probability distributions; it only requires a density function that can be evaluated point-wise up to a normalization constant, making it applicable to a variety of inference problems and unnormalized models.
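
For reference, a minimal 1-D slice sampler with Neal's stepping-out and shrinkage procedures, using only point-wise evaluations of an unnormalized density (this is the classical algorithm, not the paper's reparameterization-gradient construction):

```python
# Minimal 1-D slice sampler (Neal, 2003): stepping-out to find the
# slice, then shrinkage to sample from it. Only point-wise evaluation
# of an unnormalized density is required.
import numpy as np

rng = np.random.default_rng(3)

def f(x):
    return np.exp(-0.5 * x**2)     # unnormalized standard normal

def slice_step(x, w=1.0):
    y = rng.uniform(0, f(x))               # auxiliary "height" variable
    left = x - rng.uniform(0, w)           # randomly placed bracket
    right = left + w
    while f(left) > y:                     # step out until outside slice
        left -= w
    while f(right) > y:
        right += w
    while True:                            # shrink until a point lands in
        x_new = rng.uniform(left, right)   # the slice {x : f(x) > y}
        if f(x_new) > y:
            return x_new
        if x_new < x:
            left = x_new
        else:
            right = x_new

x, samples = 0.0, []
for _ in range(10000):
    x = slice_step(x)
    samples.append(x)
print(np.mean(samples), np.std(samples))   # approx 0 and 1
```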

Active multi-fidelity Bayesian online changepoint detection

1 code implementation • 26 Mar 2021 • Gregory W. Gundersen, Diana Cai, Chuteng Zhou, Barbara E. Engelhardt, Ryan P. Adams

We propose a multi-fidelity approach that makes cost-sensitive decisions about which data fidelity to collect based on maximizing information gain with respect to changepoints.

Edge-computing, Time Series +1
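
The method builds on Bayesian online changepoint detection (Adams & MacKay, 2007); below is a sketch of that base recursion for a Gaussian stream with known variance and a conjugate prior on the mean. The paper's cost-sensitive, multi-fidelity acquisition layer is not shown.

```python
# Sketch of the base run-length recursion from Adams & MacKay (2007),
# which the multi-fidelity method extends: maintain a posterior over
# the run length since the last changepoint.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(4, 1, 100)])

hazard = 1 / 100                      # prior changepoint rate per step
mu0, kappa0, sigma = 0.0, 1.0, 1.0    # prior: mean ~ N(mu0, sigma^2/kappa0)

r = np.array([1.0])                   # run-length distribution
mus, kappas = np.array([mu0]), np.array([kappa0])
for x in data:
    pred = norm.pdf(x, mus, sigma * np.sqrt(1 + 1 / kappas))
    growth = r * pred * (1 - hazard)          # run length grows by one
    cp = (r * pred * hazard).sum()            # changepoint: reset to zero
    r = np.append(cp, growth)
    r /= r.sum()
    mus = np.append(mu0, (kappas * mus + x) / (kappas + 1))   # conjugate
    kappas = np.append(kappa0, kappas + 1)                    # updates

print(int(np.argmax(r)))   # most likely run length; near 100 at t = 200
```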

Power posteriors do not reliably learn the number of components in a finite mixture

no code implementations • NeurIPS Workshop ICBINB 2020 • Diana Cai, Trevor Campbell, Tamara Broderick

Increasingly, though, data science papers suggest potential alternatives beyond vanilla FMMs, such as power posteriors, coarsening, and related methods.
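
For context, the power posterior tempers the likelihood with an exponent β:

```latex
\pi_\beta(\theta \mid x_{1:n}) \;\propto\; p(x_{1:n} \mid \theta)^{\beta}\, p(\theta),
\qquad \beta \in (0, 1],
```

so β = 1 recovers the standard posterior while β < 1 flattens the likelihood's influence. Coarsening instead conditions on the data lying close to the model, rather than being generated exactly from it.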

Finite mixture models do not reliably learn the number of components

no code implementations • 8 Jul 2020 • Diana Cai, Trevor Campbell, Tamara Broderick

In this paper, we add rigor to data-analysis folk wisdom by proving that under even the slightest model misspecification, the FMM component-count posterior diverges: the posterior probability of any particular finite number of components converges to 0 in the limit of infinite data.
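
In symbols, the stated result says that under misspecification the component-count posterior escapes every fixed finite value:

```latex
\lim_{n \to \infty} \, p\!\left(K = k \mid x_{1:n}\right) = 0
\qquad \text{for every fixed } k \in \mathbb{N}.
```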

Weighted Meta-Learning

no code implementations • 20 Mar 2020 • Diana Cai, Rishit Sheth, Lester Mackey, Nicolo Fusi

Meta-learning leverages related source tasks to learn an initialization that can be quickly fine-tuned to a target task with limited labeled examples.

Meta-Learning
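
A heavily simplified sketch of the weighting idea (a Reptile-style outer loop; the fixed alpha weights below are hypothetical stand-ins for the weights the paper learns, and this is not the authors' algorithm): down-weighting an unrelated source task pulls the learned initialization toward the tasks that resemble the target.

```python
# Heavily simplified sketch: a Reptile-style outer loop with per-task
# weights. The fixed alphas are hypothetical stand-ins for the weights
# the paper learns; this is not the authors' algorithm.
import numpy as np

rng = np.random.default_rng(6)
slopes = [1.0, 1.2, 5.0]                 # three 1-D regression tasks
alphas = np.array([0.45, 0.45, 0.10])    # favor the two related tasks

w = 0.0                                  # shared initialization (a slope)
for _ in range(500):
    for slope, alpha in zip(slopes, alphas):
        x = rng.normal(size=16)
        y = slope * x + rng.normal(0, 0.1, size=16)
        w_task = w
        for _ in range(5):               # inner-loop fine-tuning
            w_task -= 0.1 * 2 * np.mean((w_task * x - y) * x)
        w += 0.1 * alpha * (w_task - w)  # weighted outer update

print(f"initialization: {w:.2f}")   # near the weighted task average, ~1.5
```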

A Bayesian Nonparametric View on Count-Min Sketch

no code implementations • NeurIPS 2018 • Diana Cai, Michael Mitzenmacher, Ryan P. Adams

The count-min sketch is a time- and memory-efficient randomized data structure that provides a point estimate of the number of times an item has appeared in a data stream.
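
For reference, a minimal classical count-min sketch; the paper's Bayesian nonparametric analysis sits on top of this structure. Queries are biased upward, since hash collisions can only add to a counter.

```python
# Minimal classical count-min sketch. Each of `depth` rows hashes the
# item to one of `width` counters; queries take the minimum across
# rows, the least-collided counter, so estimates never undercount.
import numpy as np

class CountMinSketch:
    def __init__(self, width=1024, depth=4, seed=0):
        self.width, self.depth = width, depth
        self.table = np.zeros((depth, width), dtype=np.int64)
        self.seeds = np.random.default_rng(seed).integers(1, 2**31, depth)

    def _index(self, item, row):
        return hash((int(self.seeds[row]), item)) % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row, self._index(item, row)] += count

    def query(self, item):
        return min(int(self.table[row, self._index(item, row)])
                   for row in range(self.depth))

cms = CountMinSketch()
for token in ["a"] * 100 + ["b"] * 5:
    cms.add(token)
print(cms.query("a"), cms.query("b"))   # >= the true counts 100 and 5
```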

Edge-exchangeable graphs and sparsity (NIPS 2016)

no code implementations • 16 Dec 2016 • Diana Cai, Trevor Campbell, Tamara Broderick

Many popular network models rely on the assumption of (vertex) exchangeability, in which the distribution of the graph is invariant to relabelings of the vertices.
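
In symbols, vertex exchangeability requires the random adjacency array to be distributionally invariant under every relabeling of the vertices:

```latex
(A_{ij})_{i,j} \;\overset{d}{=}\; (A_{\sigma(i)\,\sigma(j)})_{i,j}
\qquad \text{for every permutation } \sigma \text{ of the vertices.}
```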

Edge-exchangeable graphs and sparsity

no code implementations • NeurIPS 2016 • Tamara Broderick, Diana Cai

We show that, unlike node exchangeability, edge exchangeability encompasses models that are known to provide a projective sequence of random graphs that circumvent the Aldous-Hoover Theorem and exhibit sparsity, i.e., sub-quadratic growth of the number of edges with the number of nodes.

Clustering
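
The notion of sparsity in the abstract can be written as sub-quadratic edge growth along the graph sequence, with E_n the number of edges and N_n the number of nodes at step n:

```latex
E_n = o\!\left(N_n^{2}\right) \quad \text{as } n \to \infty.
```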

Completely random measures for modeling power laws in sparse graphs

no code implementations • 22 Mar 2016 • Diana Cai, Tamara Broderick

Since individual network datasets continue to grow in size, it is necessary to develop models that accurately represent the real-life scaling properties of networks.

Clustering

Priors on exchangeable directed graphs

no code implementations • 28 Oct 2015 • Diana Cai, Nathanael Ackerman, Cameron Freer

Directed graphs occur throughout statistical modeling of networks, and exchangeability is a natural assumption when the ordering of vertices does not matter.

An iterative step-function estimator for graphons

no code implementations • 5 Dec 2014 • Diana Cai, Nathanael Ackerman, Cameron Freer

Exchangeable graphs arise via a sampling procedure from measurable functions known as graphons.

Clustering, Graphon Estimation
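
The sampling procedure mentioned in the abstract is easy to sketch (the paper's iterative step-function estimator itself is not shown): draw latent uniform labels U_i and connect vertices i and j independently with probability W(U_i, U_j). The example graphon W below is an arbitrary choice for illustration.

```python
# Sketch of the sampling procedure behind exchangeable graphs: draw
# latent labels U_i ~ Uniform(0, 1) and connect vertices i and j
# independently with probability W(U_i, U_j).
import numpy as np

rng = np.random.default_rng(7)

def sample_graph(W, n):
    u = rng.uniform(size=n)                   # latent vertex labels
    probs = W(u[:, None], u[None, :])         # pairwise edge probabilities
    upper = rng.random((n, n)) < probs        # independent Bernoulli draws
    adj = np.triu(upper, k=1)                 # keep i < j, no self-loops
    return (adj | adj.T).astype(int)          # symmetrize

W = lambda x, y: 0.8 * np.exp(-3 * abs(x - y))  # an example graphon
A = sample_graph(W, n=200)
print(A.sum() // 2, "edges")
# A step-function estimator approximates W by a piecewise-constant
# function, e.g., by block-averaging A under an inferred vertex order.
```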
