no code implementations • 31 Oct 2024 • Dominic Sobhani, Amir Feder, David Blei

Probabilistic topic models are a powerful tool for extracting latent themes from large text datasets.

1 code implementation • 11 Jun 2024 • Nicolas Beltran-Velez, Alessandro Antonio Grande, Achille Nazaret, Alp Kucukelbir, David Blei

In this paper, we propose Treeffuser, an easy-to-use method for probabilistic prediction on tabular data.

no code implementations • 11 Jun 2024 • Andrew Jesson, Nicolas Beltran-Velez, Quentin Chu, Sweta Karlekar, Jannik Kossen, Yarin Gal, John P. Cunningham, David Blei

With this perspective, we define a "hallucination" as a generated response to the prediction question that has low probability under the model likelihood indexed by the mechanism.
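As a rough illustration of this definition (the function, the per-token scoring, and the threshold below are all hypothetical conveniences, not the paper's mechanism-indexed likelihood):

```python
import math

def is_hallucination(token_logprobs, threshold=-4.0):
    """Flag a response as a hallucination when its mean token
    log-probability under the model falls below a threshold.
    Both the averaging and the threshold value are illustrative."""
    avg = sum(token_logprobs) / len(token_logprobs)
    return avg < threshold

# A confidently generated response (high per-token probability) passes;
# a low-probability one is flagged.
likely = [math.log(0.8)] * 5      # avg ~= -0.22
unlikely = [math.log(0.001)] * 5  # avg ~= -6.9
```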

no code implementations • 14 Apr 2024 • Bohan Wu, David Blei

Variational inference (VI) has emerged as a popular method for approximate inference for high-dimensional Bayesian models.

1 code implementation • 17 Nov 2023 • Achille Nazaret, Justin Hong, Elham Azizi, David Blei

We find that SDCD outperforms existing methods in both convergence speed and accuracy and can scale to thousands of variables.

no code implementations • NeurIPS 2023 • Amir Feder, Yoav Wald, Claudia Shi, Suchi Saria, David Blei

The reliance of text classifiers on spurious correlations can lead to poor generalization at deployment, raising concerns about their use in safety-critical domains such as healthcare.

2 code implementations • NeurIPS 2023 • Chirag Modi, Charles Margossian, Yuling Yao, Robert Gower, David Blei, Lawrence Saul

We study how GSM-VI behaves as a function of the problem dimensionality, the condition number of the target covariance matrix (when the target is Gaussian), and the degree of mismatch between the approximating and exact posterior distribution.

1 code implementation • 21 Sep 2022 • Achille Nazaret, David Blei

We introduce the unbounded depth neural network (UDN), an infinitely deep probabilistic model that adapts its complexity to the training data.

no code implementations • 19 Jul 2022 • Sachit Menon, David Blei, Carl Vondrick

Variational autoencoders (VAEs) suffer from posterior collapse, where the powerful neural networks used for modeling and inference optimize the objective without meaningfully using the latent representation.

no code implementations • 28 Jun 2022 • Chirag Modi, Yin Li, David Blei

We show that after a short initial warm-up and training phase, VBS generates better-quality samples than simple VI approaches and reduces the correlation length in the sampling phase by a factor of 10–50 compared with using only HMC to explore the posterior of initial conditions in 64³- and 128³-dimensional problems, with larger gains for high signal-to-noise data observations.

1 code implementation • 24 Mar 2022 • Dhanya Sridhar, Caterina De Bacco, David Blei

We consider the problem of estimating social influence, the effect that a person's behavior has on the future behavior of their peers.

1 code implementation • 31 May 2021 • Antonio Khalil Moretti, Liyi Zhang, Christian A. Naesseth, Hadiah Venner, David Blei, Itsik Pe'er

Bayesian phylogenetic inference is often conducted via local or sequential search over topologies and branch lengths using algorithms such as random-walk Markov chain Monte Carlo (MCMC) or Combinatorial Sequential Monte Carlo (CSMC).

1 code implementation • 28 Feb 2021 • Luhuan Wu, Andrew Miller, Lauren Anderson, Geoff Pleiss, David Blei, John Cunningham

In this work, we introduce the hierarchical inducing point GP (HIP-GP), a scalable inter-domain GP inference method that enables us to improve the approximation accuracy by increasing the number of inducing points to the millions.

1 code implementation • 24 Nov 2020 • Claudia Shi, Victor Veitch, David Blei

To address this challenge, practitioners collect and adjust for the covariates, hoping that they adequately correct for confounding.

no code implementations • NeurIPS 2020 • Christian A. Naesseth, Fredrik Lindsten, David Blei

Modern variational inference (VI) uses stochastic gradients to avoid intractable expectations, enabling large-scale probabilistic inference in complex models.
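One classic stochastic gradient used in VI is the score-function (REINFORCE) estimator; a minimal Monte Carlo sketch for a 1-D Gaussian variational family (names and settings are illustrative, not this paper's method):

```python
import numpy as np

rng = np.random.default_rng(0)

def score_function_grad(mu, f, n=200_000):
    """Score-function estimate of d/dmu E_{z ~ N(mu, 1)}[f(z)]:
    E[f(z) * d/dmu log q(z)] = E[f(z) * (z - mu)]."""
    z = rng.normal(mu, 1.0, size=n)
    return np.mean(f(z) * (z - mu))

# For f(z) = z, E[z] = mu, so the true gradient w.r.t. mu is 1.
g = score_function_grad(2.0, lambda z: z)
```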

no code implementations • 11 Mar 2020 • Jackson Loper, David Blei, John P. Cunningham, Liam Paninski

Gaussian Processes (GPs) provide powerful probabilistic frameworks for interpolation, forecasting, and smoothing, but have been hampered by computational scaling issues.
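The scaling issue refers to exact GP inference, which factorizes an n × n kernel matrix at O(n³) cost; a minimal exact-GP regression sketch (kernel choice, lengthscale, and noise level are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls ** 2)

# Noisy observations of a smooth function.
x = np.linspace(0, 5, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(50)

# The Cholesky factorization of the n x n kernel matrix is the O(n^3)
# bottleneck that limits exact GPs to modest dataset sizes.
K = rbf(x, x) + 1e-2 * np.eye(50)
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

x_test = np.array([2.5])
mean = rbf(x_test, x).dot(alpha)  # GP posterior mean at the test point
```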

1 code implementation • 6 Jun 2019 • Rob Donnelly, Francisco R. Ruiz, David Blei, Susan Athey

One source of the improvement is the model's ability to accurately estimate heterogeneity in preferences by pooling information across categories; another is its ability to estimate the preferences of consumers who have rarely or never made a purchase in a given category in the training data.

1 code implementation • ICML 2018 • Francisco Ruiz, Michalis Titsias, Adji Bousso Dieng, David Blei

It maximizes a lower bound on the marginal likelihood of the data.

no code implementations • 22 Jan 2018 • Susan Athey, David Blei, Robert Donnelly, Francisco Ruiz, Tobias Schmidt

The data are used to identify each user's approximate typical morning location, as well as their choices of lunchtime restaurants.

no code implementations • ICLR 2018 • Maja Rudolph, Francisco Ruiz, David Blei

Most embedding methods rely on a log-bilinear model to predict the occurrence of a word in a context of other words.
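A log-bilinear predictor can be pictured as scoring each vocabulary word by the inner product of its embedding with the averaged context embeddings, then normalizing with a softmax; a toy sketch with made-up dimensions and random parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
vocab, dim = 6, 4
W = rng.normal(size=(vocab, dim))  # target-word embeddings
C = rng.normal(size=(vocab, dim))  # context embeddings

def word_probs(context_ids):
    """Log-bilinear prediction: score each word by the inner product of
    its embedding with the averaged context embeddings, then softmax."""
    h = C[context_ids].mean(axis=0)
    scores = W @ h
    e = np.exp(scores - scores.max())
    return e / e.sum()

p = word_probs([0, 2, 3])  # distribution over the vocabulary
```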

no code implementations • NeurIPS 2017 • Adji Bousso Dieng, Dustin Tran, Rajesh Ranganath, John Paisley, David Blei

In this paper, we propose CHIVI, a black-box variational inference algorithm that minimizes D_χ(p ‖ q), the χ-divergence from p to q.
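For intuition, one member of this family, the χ²-divergence E_q[(p/q)²] − 1, can be estimated by Monte Carlo with samples from q; a toy 1-D Gaussian sketch (this is a naive estimate, not CHIVI's actual bound-minimization procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(z, mu, sigma):
    return np.exp(-0.5 * ((z - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def chi2_divergence(mu_p, s_p, mu_q, s_q, n=400_000):
    """Monte Carlo estimate of the chi^2-divergence
    D(p || q) = E_q[(p(z)/q(z))^2] - 1, using samples from q."""
    z = rng.normal(mu_q, s_q, size=n)
    w = gauss_pdf(z, mu_p, s_p) / gauss_pdf(z, mu_q, s_q)
    return np.mean(w ** 2) - 1.0

# Identical distributions: the divergence is zero.
d0 = chi2_divergence(0.0, 1.0, 0.0, 1.0)
# p = N(0, 1), q = N(0, s^2): closed form s^2 / sqrt(2 s^2 - 1) - 1,
# which is 2 / sqrt(3) - 1 ~= 0.155 for s = sqrt(2).
d1 = chi2_divergence(0.0, 1.0, 0.0, np.sqrt(2.0))
```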

1 code implementation • NeurIPS 2017 • Liping Liu, Francisco Ruiz, Susan Athey, David Blei

Embedding models consider the probability of a target observation (a word or an item) conditioned on the elements in the context (other words or items).

1 code implementation • NeurIPS 2017 • Maja Rudolph, Francisco Ruiz, Susan Athey, David Blei

Here we develop structured exponential family embeddings (S-EFE), a method for discovering embeddings that vary across related groups of data.

1 code implementation • 23 Mar 2017 • Maja Rudolph, David Blei

Word embeddings are a powerful approach for unsupervised analysis of language.

no code implementations • NeurIPS 2016 • Francisco R. Ruiz, Michalis Titsias, David Blei

The reparameterization gradient has become a widely used method to obtain Monte Carlo gradients to optimize the variational objective.
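For a Gaussian variational family, the reparameterization trick writes z = μ + σε with ε ~ N(0, 1) and differentiates through the sample; a minimal sketch (function and settings are illustrative, not this paper's generalized estimator):

```python
import numpy as np

rng = np.random.default_rng(42)

def reparam_grad_mu(mu, sigma, df, n=100_000):
    """Reparameterization estimate of d/dmu E_{z ~ N(mu, sigma^2)}[f(z)]:
    with z = mu + sigma * eps, the gradient is E_eps[f'(mu + sigma * eps)]."""
    eps = rng.standard_normal(n)
    return np.mean(df(mu + sigma * eps))

# For f(z) = z^2, E[z^2] = mu^2 + sigma^2, so the true gradient
# w.r.t. mu is 2 * mu.
g = reparam_grad_mu(1.5, 0.7, lambda z: 2 * z)
```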

no code implementations • 6 Aug 2016 • Rajesh Ranganath, Adler Perotte, Noémie Elhadad, David Blei

The electronic health record (EHR) provides an unprecedented opportunity to build actionable tools to support physicians at the point of care.

no code implementations • NeurIPS 2015 • James Mcinerney, Rajesh Ranganath, David Blei

Many modern data analysis problems involve inferences from streaming data.

no code implementations • 2 Jul 2015 • Rajesh Ranganath, David Blei

We develop correlated random measures, random measures where the atom weights can exhibit a flexible pattern of dependence, and use them to develop powerful hierarchical Bayesian nonparametric models.

no code implementations • NeurIPS 2014 • Neil Houlsby, David Blei

Stochastic variational inference (SVI) uses stochastic optimization to scale up Bayesian computation to massive data.

3 code implementations • NeurIPS 2014 • Prem K. Gopalan, Laurent Charlin, David Blei

We develop collaborative topic Poisson factorization (CTPF), a generative model of articles and reader preferences.
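The Poisson-factorization ingredients behind CTPF can be sketched generatively: gamma-distributed latent factors and Poisson-distributed counts from their inner product (dimensions and hyperparameters below are made up, and this omits CTPF's topic/correction structure):

```python
import numpy as np

rng = np.random.default_rng(0)
n_docs, n_users, k = 20, 15, 3

# Gamma-distributed latent factors (shape/scale values are illustrative).
theta = rng.gamma(shape=0.3, scale=1.0, size=(n_docs, k))  # article factors
eta = rng.gamma(shape=0.3, scale=1.0, size=(n_users, k))   # reader preferences

# Poisson-distributed click counts with rate = inner product of the factors.
rate = eta @ theta.T
clicks = rng.poisson(rate)
```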

no code implementations • 7 Nov 2014 • Stephan Mandt, James McInerney, Farhan Abrol, Rajesh Ranganath, David Blei

Lastly, we develop local variational tempering, which assigns a latent temperature to each data point; this allows for dynamic annealing that varies across data.

no code implementations • NeurIPS 2014 • Stephan Mandt, David Blei

It uses stochastic optimization to fit a variational distribution, following easy-to-compute noisy natural gradients.

no code implementations • NeurIPS 2013 • Dae Il Kim, Prem K. Gopalan, David Blei, Erik Sudderth

In large social networks, we expect entities to participate in multiple communities, and the number of communities to grow with the network size.

no code implementations • NeurIPS 2013 • Prem K. Gopalan, Chong Wang, David Blei

We evaluate the link prediction accuracy of our algorithm on eight real-world networks with up to 60,000 nodes, and 24 benchmark networks.

no code implementations • 27 Jun 2012 • John Paisley, David Blei, Michael Jordan

This requires the ability to integrate a sum of terms in the log joint likelihood using this factorized distribution.

no code implementations • 13 Jun 2012 • Chong Wang, David Blei, David Heckerman

In contrast to the cDTM, the original discrete-time dynamic topic model (dDTM) requires that time be discretized.
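Discretized time in a dDTM-style model means a topic's natural parameters evolve by a Gaussian random walk across time slices, with a softmax giving each slice's word distribution; a toy sketch (sizes and drift variance are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T, vocab = 5, 8

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# dDTM-style drift: natural parameters follow a Gaussian random walk
# over discrete time slices (drift variance 0.1 is illustrative).
beta = np.zeros((T, vocab))
for t in range(1, T):
    beta[t] = beta[t - 1] + rng.normal(0.0, np.sqrt(0.1), size=vocab)

# Each time slice's topic is a distribution over the vocabulary.
word_dists = np.array([softmax(b) for b in beta])
```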


Papers With Code is a free resource with all data licensed under CC-BY-SA.