1 code implementation • 24 Mar 2022 • Dhanya Sridhar, Caterina De Bacco, David Blei
We consider the problem of estimating social influence, the effect that a person's behavior has on the future behavior of their peers.
no code implementations • NeurIPS 2021 • Yixin Wang, David Blei, John P. Cunningham
Existing approaches to posterior collapse often attribute it to the use of neural networks or to optimization issues arising from the variational approximation.
1 code implementation • 31 May 2021 • Antonio Khalil Moretti, Liyi Zhang, Christian A. Naesseth, Hadiah Venner, David Blei, Itsik Pe'er
Bayesian phylogenetic inference is often conducted via local or sequential search over topologies and branch lengths using algorithms such as random-walk Markov chain Monte Carlo (MCMC) or Combinatorial Sequential Monte Carlo (CSMC).
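As a point of reference for the random-walk MCMC baseline mentioned above, here is a minimal Metropolis sketch over branch lengths with the tree topology held fixed. This is a generic illustration, not the paper's method; log_post is a hypothetical stand-in for a phylogenetic log posterior.

    import numpy as np

    def random_walk_mcmc(log_post, branch_lengths, n_iters=1000, step=0.05, seed=0):
        # Random-walk Metropolis over branch lengths (topology held fixed).
        # Proposals are reflected at zero, which keeps the proposal symmetric.
        rng = np.random.default_rng(seed)
        current = np.asarray(branch_lengths, dtype=float)
        current_lp = log_post(current)
        samples = []
        for _ in range(n_iters):
            proposal = np.abs(current + step * rng.normal(size=current.shape))
            proposal_lp = log_post(proposal)
            if np.log(rng.uniform()) < proposal_lp - current_lp:  # accept/reject
                current, current_lp = proposal, proposal_lp
            samples.append(current.copy())
        return np.array(samples)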
1 code implementation • 28 Feb 2021 • Luhuan Wu, Andrew Miller, Lauren Anderson, Geoff Pleiss, David Blei, John Cunningham
In this work, we introduce the hierarchical inducing point GP (HIP-GP), a scalable inter-domain GP inference method that improves approximation accuracy by increasing the number of inducing points into the millions.
1 code implementation • 24 Nov 2020 • Claudia Shi, Victor Veitch, David Blei
To address this challenge, practitioners collect and adjust for the covariates, hoping that they adequately correct for confounding.
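The standard adjustment this sentence describes can be sketched as outcome regression: fit E[y | t, X] and contrast the two treatment arms. A minimal sketch under a linear-model assumption, not the paper's method; all names are hypothetical.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def adjusted_ate(X, t, y):
        # Outcome-regression adjustment: fit E[y | t, X], then average the
        # predicted difference between treated (t=1) and control (t=0).
        model = LinearRegression().fit(np.column_stack([t, X]), y)
        treated = np.column_stack([np.ones(len(X)), X])
        control = np.column_stack([np.zeros(len(X)), X])
        return float(np.mean(model.predict(treated) - model.predict(control)))

This estimate is only as good as the covariate set: if X misses a confounder, the adjustment fails, which is the concern the sentence alludes to.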
no code implementations • NeurIPS 2020 • Christian A. Naesseth, Fredrik Lindsten, David Blei
Modern variational inference (VI) uses stochastic gradients to avoid intractable expectations, enabling large-scale probabilistic inference in complex models.
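One concrete instance of such a stochastic gradient is the score-function (REINFORCE) estimator, which replaces the intractable expectation with samples from q. A minimal sketch, assuming a hypothetical log_joint(z) = log p(x, z) with x held fixed:

    import numpy as np

    def score_grad_elbo(log_joint, mu, sigma, n_samples=64, seed=0):
        # Score-function estimator of grad_mu ELBO for q(z) = N(mu, sigma^2 I):
        # grad_mu E_q[f(z)] = E_q[f(z) * grad_mu log q(z)], estimated by sampling.
        rng = np.random.default_rng(seed)
        z = mu + sigma * rng.normal(size=(n_samples, mu.size))
        f = np.array([log_joint(zi) + 0.5 * np.sum(((zi - mu) / sigma) ** 2)
                      for zi in z])            # log p(x, z) - log q(z), up to constants
        score = (z - mu) / sigma ** 2          # grad_mu log q(z)
        return (f[:, None] * score).mean(axis=0)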
no code implementations • 11 Mar 2020 • Jackson Loper, David Blei, John P. Cunningham, Liam Paninski
Gaussian Processes (GPs) provide powerful probabilistic frameworks for interpolation, forecasting, and smoothing, but have been hampered by computational scaling issues.
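For context, exact GP regression requires solving an n-by-n linear system, which is the cubic-cost bottleneck the sentence alludes to. A minimal sketch with an RBF kernel (hyperparameters illustrative):

    import numpy as np

    def gp_posterior_mean(X, y, Xstar, lengthscale=1.0, noise=0.1):
        # Posterior mean of exact GP regression; the O(n^3) solve on the
        # training kernel matrix K is what limits scaling.
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-0.5 * d2 / lengthscale ** 2)
        K = k(X, X) + noise ** 2 * np.eye(len(X))
        return k(Xstar, X) @ np.linalg.solve(K, y)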
no code implementations • 6 Jun 2019 • Rob Donnelly, Francisco R. Ruiz, David Blei, Susan Athey
One source of the improvement is the model's ability to accurately estimate heterogeneity in preferences by pooling information across categories; another is its ability to estimate the preferences of consumers who have rarely or never made a purchase in a given category in the training data.
1 code implementation • ICML 2018 • Francisco Ruiz, Michalis Titsias, Adji Bousso Dieng, David Blei
It maximizes a lower bound on the marginal likelihood of the data.
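The bound in question is presumably the standard evidence lower bound (ELBO), $\log p(x) \geq \mathbb{E}_{q(z)}[\log p(x, z) - \log q(z)]$, which is tight when $q$ matches the posterior; the paper's specific variant may differ.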
no code implementations • 22 Jan 2018 • Susan Athey, David Blei, Robert Donnelly, Francisco Ruiz, Tobias Schmidt
The data is used to identify users' approximate typical morning location, as well as their choices of lunchtime restaurants.
no code implementations • ICLR 2018 • Maja Rudolph, Francisco Ruiz, David Blei
Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words.
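A log-bilinear model scores each candidate word by the inner product between its embedding and the summed context vectors, then normalizes with a softmax. A minimal sketch with hypothetical embedding matrices emb and ctx_emb of shape (vocab_size, dim):

    import numpy as np

    def log_bilinear_prob(word, context, emb, ctx_emb):
        # p(word | context) proportional to exp(emb[word] . sum of context vectors)
        context_sum = ctx_emb[context].sum(axis=0)
        scores = emb @ context_sum             # one score per vocabulary word
        scores -= scores.max()                 # numerical stability
        probs = np.exp(scores) / np.exp(scores).sum()
        return probs[word]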
no code implementations • NeurIPS 2017 • Adji Bousso Dieng, Dustin Tran, Rajesh Ranganath, John Paisley, David Blei
In this paper we propose CHIVI, a black-box variational inference algorithm that minimizes $D_{\chi}(p || q)$, the $\chi$-divergence from $p$ to $q$.
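Minimizing $D_{\chi}(p || q)$ is typically done through an upper bound on the evidence, the CUBO $= \frac{1}{n}\log \mathbb{E}_{q}[(p(x, z)/q(z))^{n}]$. A minimal Monte Carlo sketch of that estimate, with hypothetical hooks for the model (log_joint), the variational density (log_q), and its sampler (sample_q):

    import numpy as np

    def cubo(log_joint, log_q, sample_q, n_samples=128, order=2, seed=0):
        # Monte Carlo estimate of CUBO = (1/n) log E_q[(p(x, z)/q(z))^n],
        # computed with a log-sum-exp for numerical stability.
        rng = np.random.default_rng(seed)
        z = sample_q(rng, n_samples)
        log_w = np.array([log_joint(zi) - log_q(zi) for zi in z])
        m = log_w.max()
        return (order * m + np.log(np.mean(np.exp(order * (log_w - m))))) / order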
1 code implementation • NeurIPS 2017 • Liping Liu, Francisco Ruiz, Susan Athey, David Blei
Embedding models consider the probability of a target observation (a word or an item) conditioned on the elements in the context (other words or items).
1 code implementation • NeurIPS 2017 • Maja Rudolph, Francisco Ruiz, Susan Athey, David Blei
Here we develop structured exponential family embeddings (S-EFE), a method for discovering embeddings that vary across related groups of data.
1 code implementation • 23 Mar 2017 • Maja Rudolph, David Blei
Word embeddings are a powerful approach for unsupervised analysis of language.
no code implementations • NeurIPS 2016 • Francisco R. Ruiz, Michalis Titsias, David Blei
The reparameterization gradient has become a widely used method to obtain Monte Carlo gradients to optimize the variational objective.
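The trick: write $z \sim \mathcal{N}(\mu, \sigma^2)$ as a deterministic transform of parameter-free noise, $z = \mu + \sigma\epsilon$ with $\epsilon \sim \mathcal{N}(0, 1)$, so that Monte Carlo gradients flow through $\mu$ and $\sigma$. A minimal sketch:

    import numpy as np

    def reparam_sample(mu, log_sigma, rng, n=1):
        # Reparameterized Gaussian samples: z = mu + sigma * eps, eps ~ N(0, I).
        # Since eps carries no parameters, grad E_q[f(z)] = E[grad f(mu + sigma * eps)].
        eps = rng.normal(size=(n, mu.size))
        return mu + np.exp(log_sigma) * eps

The basic trick applies only to distributions that admit such a simple standardization.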
no code implementations • 6 Aug 2016 • Rajesh Ranganath, Adler Perotte, Noémie Elhadad, David Blei
The electronic health record (EHR) provides an unprecedented opportunity to build actionable tools to support physicians at the point of care.
no code implementations • NeurIPS 2015 • James McInerney, Rajesh Ranganath, David Blei
Many modern data analysis problems involve inferences from streaming data.
no code implementations • 2 Jul 2015 • Rajesh Ranganath, David Blei
We develop correlated random measures, random measures where the atom weights can exhibit a flexible pattern of dependence, and use them to develop powerful hierarchical Bayesian nonparametric models.
3 code implementations • NeurIPS 2014 • Prem K. Gopalan, Laurent Charlin, David Blei
We develop collaborative topic Poisson factorization (CTPF), a generative model of articles and reader preferences.
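The Poisson factorization core can be sketched generatively: gamma-distributed user preferences and item attributes, with observed counts drawn from a Poisson whose rate is their inner product. Hyperparameters below are illustrative, and CTPF itself additionally models article content; this shows only the factorization core.

    import numpy as np

    def sample_poisson_factorization(n_users, n_items, k=10, seed=0):
        # theta: user preferences; beta: item attributes; counts ~ Poisson(theta . beta)
        rng = np.random.default_rng(seed)
        theta = rng.gamma(shape=0.3, scale=1.0, size=(n_users, k))
        beta = rng.gamma(shape=0.3, scale=1.0, size=(n_items, k))
        return rng.poisson(theta @ beta.T)     # user-by-item count matrix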
no code implementations • NeurIPS 2014 • Neil Houlsby, David Blei
Stochastic variational inference (SVI) uses stochastic optimization to scale up Bayesian computation to massive data.
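The core SVI update computes an intermediate estimate of the global variational parameter from a scaled minibatch, then averages it into the current estimate. A minimal sketch for the conditionally conjugate case, with lam_hat_fn a hypothetical model-specific hook and data a NumPy array:

    import numpy as np

    def svi_step(lam, lam_hat_fn, data, batch_size, step_size, rng):
        # One stochastic variational inference update:
        # lam <- (1 - rho) * lam + rho * lam_hat, where lam_hat is computed
        # from a minibatch rescaled as if it were the full dataset.
        idx = rng.choice(len(data), size=batch_size, replace=False)
        lam_hat = lam_hat_fn(data[idx], scale=len(data) / batch_size)
        return (1.0 - step_size) * lam + step_size * lam_hat

With a decreasing step size satisfying the usual Robbins-Monro conditions, iterates of this form converge to a local optimum of the variational objective.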
no code implementations • 7 Nov 2014 • Stephan Mandt, James McInerney, Farhan Abrol, Rajesh Ranganath, David Blei
Lastly, we develop local variational tempering, which assigns a latent temperature to each data point; this allows for dynamic annealing that varies across data.
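Concretely, tempering raises each likelihood term to an inverse temperature, $p(x_i \mid z)^{1/T_i}$; making $T_i$ a latent variable per data point is what lets the effective annealing schedule differ across observations.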
no code implementations • NeurIPS 2014 • Stephan Mandt, David Blei
It uses stochastic optimization to fit a variational distribution, following easy-to-compute noisy natural gradients.
no code implementations • NeurIPS 2013 • Dae Il Kim, Prem K. Gopalan, David Blei, Erik Sudderth
In large social networks, we expect entities to participate in multiple communities, and the number of communities to grow with the network size.
no code implementations • NeurIPS 2013 • Prem K. Gopalan, Chong Wang, David Blei
We evaluate the link prediction accuracy of our algorithm on eight real-world networks with up to 60,000 nodes, and 24 benchmark networks.
no code implementations • 27 Jun 2012 • John Paisley, David Blei, Michael Jordan
This requires taking the expectation of a sum of terms in the log joint likelihood under the factorized distribution.
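Concretely, with a mean-field family $q(z) = \prod_i q_i(z_i)$, the variational objective contains expectations of the form $\mathbb{E}_q[\log p(x, z)]$, computed term by term under the factorized $q$; when some terms have no closed form, they can instead be estimated by sampling.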
no code implementations • 13 Jun 2012 • Chong Wang, David Blei, David Heckerman
In contrast to the cDTM, the original discrete-time dynamic topic model (dDTM) requires that time be discretized.