Search Results for author: Andrew Gelman

Found 21 papers, 17 papers with code

Artificial Intelligence and Aesthetic Judgment

no code implementations • 21 Aug 2023 • Jessica Hullman, Ari Holtzman, Andrew Gelman

In this essay, we focus on an unresolved tension when we bring this dilemma to bear in the context of generative AI: are we looking for proof that generated media reflects something about the conditions that created it or some eternal human essence?

Causal Inference

Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach

1 code implementation • 8 Feb 2023 • Han Guo, Philip Greengard, Hongyi Wang, Andrew Gelman, Yoon Kim, Eric P. Xing

A recent alternative formulation instead treats federated learning as a distributed inference problem, where the goal is to infer a global posterior from partitioned client data (Al-Shedivat et al., 2021).

Distributed Optimization • Federated Learning +1
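
To make the partitioned-data idea concrete, here is a minimal sketch, not the paper's algorithm: if each client's likelihood contribution is approximated by a Gaussian, a global posterior follows by adding natural parameters, the one-pass combination that expectation propagation then refines iteratively. All numbers below are made up.

```python
import numpy as np

# Hypothetical sketch: combine Gaussian "site" approximations of client
# likelihoods with a Gaussian prior. In natural parameters (precision,
# precision * mean), independent contributions simply add.

def combine_sites(prior_mean, prior_var, site_means, site_vars):
    prec = 1.0 / prior_var + sum(1.0 / v for v in site_vars)
    lin = prior_mean / prior_var + sum(m / v for m, v in zip(site_means, site_vars))
    return lin / prec, 1.0 / prec          # global posterior mean and variance

# Three clients, each summarizing its local data by a Gaussian approximation.
mean, var = combine_sites(0.0, 10.0,
                          site_means=[1.2, 0.8, 1.0],
                          site_vars=[0.5, 0.4, 0.6])
print(mean, var)
```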

The worst of both worlds: A comparative analysis of errors in learning from data in psychology and machine learning

no code implementations • 12 Mar 2022 • Jessica Hullman, Sayash Kapoor, Priyanka Nanayakkara, Andrew Gelman, Arvind Narayanan

We conclude by discussing risks that arise when sources of errors are misdiagnosed and the need to acknowledge the role of human inductive biases in learning and reform.

Causal Inference

Toward a Taxonomy of Trust for Probabilistic Machine Learning

no code implementations • 5 Dec 2021 • Tamara Broderick, Andrew Gelman, Rachael Meager, Anna L. Smith, Tian Zheng

Probabilistic machine learning increasingly informs critical decisions in medicine, economics, politics, and beyond.

BIG-bench Machine Learning • Translation

Pathfinder: Parallel quasi-Newton variational inference

5 code implementations • 9 Aug 2021 • Lu Zhang, Bob Carpenter, Andrew Gelman, Aki Vehtari

Pathfinder returns draws from the approximation with the lowest estimated Kullback-Leibler (KL) divergence to the true posterior.

Pathfinder • Variational Inference
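
As a rough illustration of that selection step, the sketch below scores candidate Gaussian approximations by a Monte Carlo ELBO estimate; maximizing the ELBO minimizes the KL divergence to the posterior up to the unknown normalizing constant. The target and the candidate list are toy stand-ins, not Pathfinder's quasi-Newton optimization path.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(x):
    return -0.5 * (x - 3.0) ** 2            # unnormalized target: N(3, 1)

def elbo(mu, sigma, n=2000):
    """Monte Carlo estimate of E_q[log p(z) - log q(z)]."""
    z = mu + sigma * rng.standard_normal(n)
    log_q = -0.5 * ((z - mu) / sigma) ** 2 - np.log(sigma) - 0.5 * np.log(2 * np.pi)
    return np.mean(log_p(z) - log_q)

# Candidate approximations along a (hypothetical) optimization path;
# keep the one with the highest ELBO, i.e. the lowest estimated KL.
candidates = [(0.0, 1.0), (1.5, 1.2), (3.0, 1.0)]
print(max(candidates, key=lambda c: elbo(*c)))
```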

Bayesian hierarchical stacking: Some models are (somewhere) useful

1 code implementation • 22 Jan 2021 • Yuling Yao, Gregor Pirš, Aki Vehtari, Andrew Gelman

We show that stacking is most effective when model predictive performance is heterogeneous in inputs, and we can further improve the stacked mixture with a hierarchical model.

Bayesian Inference • Time Series +1
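
A minimal sketch of the input-dependent weighting idea, with made-up coefficients: each model's stacking weight becomes a softmax of a function of the input, so the weights vary over the covariate space while still summing to one at every point.

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical coefficients: model k gets weight w_k(x) = softmax(alpha_k + beta_k * x),
# so different models can dominate in different regions of the input space.
alpha = np.array([0.0, 0.5, -0.5])            # per-model intercepts
beta = np.array([0.0, 1.0, -1.0])             # per-model slopes on a scalar input
x = np.linspace(-2.0, 2.0, 5)[:, None]        # inputs at which to predict
weights = softmax(alpha + x * beta, axis=1)   # each row sums to 1
print(weights)
```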

What are the most important statistical ideas of the past 50 years?

1 code implementation • 30 Nov 2020 • Andrew Gelman, Aki Vehtari

We review the most important statistical ideas of the past half century, which we categorize as: counterfactual causal inference, bootstrapping and simulation-based inference, overparameterized models and regularization, Bayesian multilevel models, generic computation algorithms, adaptive decision analysis, robust inference, and exploratory data analysis.

Causal Inference • Methodology

Adaptive Path Sampling in Metastable Posterior Distributions

1 code implementation • 1 Sep 2020 • Yuling Yao, Collin Cademartori, Aki Vehtari, Andrew Gelman

The normalizing constant plays an important role in Bayesian computation, and there is a large literature on methods for computing or approximating normalizing constants that cannot be evaluated in closed form.

Computation • Methodology
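
For intuition, here is a minimal path-sampling (thermodynamic integration) example in one dimension, with grid quadrature standing in for MCMC so it runs standalone; the paper's contribution is an adaptive version of this idea for metastable posteriors.

```python
import numpy as np

# log Z1 - log Z0 = integral over lambda of E_lambda[log p1 - log p0]
# along the geometric path q_lambda proportional to p0^(1-lambda) * p1^lambda.
theta = np.linspace(-20.0, 20.0, 4001)
dx = theta[1] - theta[0]
log_p0 = -0.5 * theta**2 - 0.5 * np.log(2 * np.pi)   # N(0, 1), so Z0 = 1
log_p1 = -0.5 * ((theta - 4.0) / 2.0) ** 2           # unnormalized N(4, 2^2)

lambdas = np.linspace(0.0, 1.0, 41)
e_u = []
for lam in lambdas:
    log_q = (1 - lam) * log_p0 + lam * log_p1
    w = np.exp(log_q - log_q.max())
    w /= w.sum() * dx                                # normalize q_lambda on the grid
    e_u.append(np.sum(w * (log_p1 - log_p0)) * dx)   # E_lambda[log p1 - log p0]
e_u = np.array(e_u)
log_Z1 = np.sum((e_u[1:] + e_u[:-1]) / 2) * (lambdas[1] - lambdas[0])
print(log_Z1, np.log(2.0) + 0.5 * np.log(2 * np.pi))  # estimate vs. exact
```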

Stacking for Non-mixing Bayesian Computations: The Curse and Blessing of Multimodal Posteriors

1 code implementation • 22 Jun 2020 • Yuling Yao, Aki Vehtari, Andrew Gelman

When working with multimodal Bayesian posterior distributions, Markov chain Monte Carlo (MCMC) algorithms have difficulty moving between modes, and default variational or mode-based approximate inferences will understate posterior uncertainty.

Bayesian Inference • regression +2

Improving multilevel regression and poststratification with structured priors

2 code implementations • 19 Aug 2019 • Yuxiang Gao, Lauren Kennedy, Daniel Simpson, Andrew Gelman

A central theme in the field of survey statistics is estimating population-level quantities through data coming from potentially non-representative samples of the population.

Methodology
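
The poststratification step can be sketched in a few lines with hypothetical numbers: model-based estimates for demographic cells are reweighted by the cells' population counts. In practice the cell estimates come from the fitted multilevel model and the counts from a census table.

```python
import numpy as np

# Hypothetical values: four poststratification cells.
cell_pred = np.array([0.62, 0.48, 0.55, 0.40])   # model-based estimate per cell
cell_N = np.array([1200, 800, 1500, 500])        # population count per cell

# Population-level estimate: cell predictions weighted by population shares.
estimate = np.sum(cell_N * cell_pred) / np.sum(cell_N)
print(estimate)
```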

Rank-normalization, folding, and localization: An improved $\widehat{R}$ for assessing convergence of MCMC

2 code implementations • 19 Mar 2019 • Aki Vehtari, Andrew Gelman, Daniel Simpson, Bob Carpenter, Paul-Christian Bürkner

In this paper we show that the convergence diagnostic $\widehat{R}$ of Gelman and Rubin (1992) has serious flaws.

Computation • Methodology
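
A minimal sketch of the paper's rank-normalized split-$\widehat{R}$ (omitting the folding step that targets variance discrepancies): split each chain in half, rank-normalize the pooled draws, and apply the usual between/within-variance comparison to the transformed draws.

```python
import numpy as np
from scipy.stats import norm, rankdata

def split_rhat(chains):                     # chains: (n_chains, n_draws)
    n = chains.shape[1] // 2
    x = chains[:, : 2 * n].reshape(-1, n)   # split each chain into two halves
    ranks = rankdata(x).reshape(x.shape)    # rank over all draws jointly
    z = norm.ppf((ranks - 0.375) / (x.size + 0.25))   # rank-normalize
    w = z.var(axis=1, ddof=1).mean()        # mean within-chain variance
    b = n * z.mean(axis=1).var(ddof=1)      # between-chain variance
    return np.sqrt(((n - 1) / n * w + b / n) / w)

rng = np.random.default_rng(1)
print(split_rhat(rng.standard_normal((4, 1000))))   # ~1.00 for well-mixed chains
```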

Yes, but Did It Work?: Evaluating Variational Inference

1 code implementation • ICML 2018 • Yuling Yao, Aki Vehtari, Daniel Simpson, Andrew Gelman

While it's always possible to compute a variational approximation to a posterior distribution, it can be difficult to discover problems with this approximation.

Variational Inference
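
One of the paper's diagnostics can be sketched as follows: draw from the approximation $q$, form importance ratios against the target, and fit a generalized Pareto distribution to the largest ratios; a large estimated shape $\hat{k}$ (above roughly 0.7) flags an unreliable approximation. The densities below are toy stand-ins, deliberately mismatched in scale.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)

log_p = lambda x: -0.5 * (x / 2.0) ** 2     # target: N(0, 2^2), unnormalized
log_q = lambda x: -0.5 * x**2               # approximation: N(0, 1), too narrow

x = rng.standard_normal(4000)               # draws from q
log_r = log_p(x) - log_q(x)
r = np.sort(np.exp(log_r - log_r.max()))    # importance ratios, up to a constant
m = 400                                     # largest ~10% of the ratios
khat, _, _ = genpareto.fit(r[-m:] - r[-m - 1], floc=0.0)
print(khat)                                 # large here: q's tails are too light
```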

Visualization in Bayesian workflow

2 code implementations • 5 Sep 2017 • Jonah Gabry, Daniel Simpson, Aki Vehtari, Michael Betancourt, Andrew Gelman

Bayesian data analysis is about more than just computing a posterior distribution, and Bayesian visualization is about more than trace plots of Markov chains.

Methodology • Applications

Using stacking to average Bayesian predictive distributions

2 code implementations • 6 Apr 2017 • Yuling Yao, Aki Vehtari, Daniel Simpson, Andrew Gelman

The widely recommended procedure of Bayesian model averaging is flawed in the M-open setting in which the true data-generating process is not one of the candidate models being fit.

Methodology • Computation
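
Stacking chooses simplex weights to maximize the leave-one-out log score of the combined predictive distribution, rather than weighting by marginal likelihoods. A minimal sketch with a hypothetical matrix of LOO log predictive densities (in practice these come from PSIS-LOO):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# lpd[i, k] = hypothetical log p(y_i | y_-i, model k) for 100 points, 3 models.
lpd = rng.normal(loc=[-1.0, -1.2, -2.0], scale=0.3, size=(100, 3))

def neg_log_score(a):                    # softmax keeps w on the simplex
    w = np.exp(a - a.max())
    w /= w.sum()
    return -np.sum(np.log(np.exp(lpd) @ w))

a = minimize(neg_log_score, np.zeros(3)).x
w = np.exp(a - a.max())
w /= w.sum()
print(w)                                 # weights favor the better-scoring models
```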

Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC

7 code implementations • 16 Jul 2015 • Aki Vehtari, Andrew Gelman, Jonah Gabry

Leave-one-out cross-validation (LOO) and the widely applicable information criterion (WAIC) are methods for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model using the log-likelihood evaluated at the posterior simulations of the parameter values.

Computation • Methodology
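
The WAIC half of the computation is a few lines given a matrix of pointwise log-likelihoods over posterior draws, using $\mathrm{lppd}_i = \log \frac{1}{S}\sum_s \exp(\ell_{si})$ and $p_{\mathrm{waic},i} = \mathrm{Var}_s(\ell_{si})$; PSIS-LOO additionally needs the importance-sampling machinery of the Pareto smoothed importance sampling paper below.

```python
import numpy as np
from scipy.special import logsumexp

def waic(log_lik):                       # log_lik: (S draws, n data points)
    S = log_lik.shape[0]
    lppd = logsumexp(log_lik, axis=0) - np.log(S)   # pointwise log pred. density
    p_waic = log_lik.var(axis=0, ddof=1)            # effective number of params
    return (lppd - p_waic).sum(), p_waic.sum()      # elpd_waic, p_waic

rng = np.random.default_rng(0)
log_lik = -0.5 * rng.standard_normal((1000, 50)) ** 2   # hypothetical draws
print(waic(log_lik))
```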

Pareto Smoothed Importance Sampling

9 code implementations • 9 Jul 2015 • Aki Vehtari, Daniel Simpson, Andrew Gelman, Yuling Yao, Jonah Gabry

Importance weighting is a general way to adjust Monte Carlo integration to account for draws from the wrong distribution, but the resulting estimate can be highly variable when the importance ratios have a heavy right tail.
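
A minimal sketch of the smoothing step, with a toy heavy-tailed sample and a simplified fixed tail size (the paper derives the tail-size and threshold rules): the largest ratios are replaced by quantiles of a generalized Pareto distribution fitted to them, and the fitted shape $\hat{k}$ doubles as a reliability diagnostic.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
r = np.exp(2.0 * rng.standard_normal(2000))     # toy heavy-tailed importance ratios

order = np.argsort(r)
m = 200                                         # simplified fixed tail size
tail = order[-m:]                               # indices of the m largest ratios
mu = r[order[-m - 1]]                           # threshold just below the tail
khat, _, scale = genpareto.fit(r[tail] - mu, floc=0.0)

smoothed = r.copy()
q = (np.arange(1, m + 1) - 0.5) / m             # plotting positions
smoothed[tail] = np.minimum(mu + genpareto.ppf(q, khat, scale=scale), r.max())
print(khat)                                     # the reliability diagnostic
```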

Expectation propagation as a way of life: A framework for Bayesian inference on partitioned data

2 code implementations • 16 Dec 2014 • Aki Vehtari, Andrew Gelman, Tuomas Sivula, Pasi Jylänki, Dustin Tran, Swupnil Sahai, Paul Blomstedt, John P. Cunningham, David Schiminovich, Christian Robert

A common divide-and-conquer approach for Bayesian computation with big data is to partition the data, perform local inference for each piece separately, and combine the results to obtain a global posterior approximation.

Bayesian Inference
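
The refinement loop behind this framework can be sketched for a one-dimensional Gaussian model: remove one partition's site from the global approximation to form a cavity, incorporate that partition's data exactly, and refit the site so the global matches the tilted distribution. The toy below uses a conjugate Gaussian likelihood so the moment step is closed form; everything is hypothetical.

```python
import numpy as np

# Natural parameters: tau = precision, nu = precision * mean.
def ep_site_update(global_np, site_np, y_k, noise_var):
    tau_g, nu_g = global_np
    tau_s, nu_s = site_np
    tau_cav, nu_cav = tau_g - tau_s, nu_g - nu_s     # cavity: global minus site
    tau_tilt = tau_cav + len(y_k) / noise_var        # tilted moments, closed form
    nu_tilt = nu_cav + np.sum(y_k) / noise_var       #   for y_i ~ N(theta, noise_var)
    new_site = (tau_tilt - tau_cav, nu_tilt - nu_cav)
    return (tau_tilt, nu_tilt), new_site             # updated global and site

y_k = np.array([1.1, 0.9, 1.3])                      # one partition's data
new_global, new_site = ep_site_update((1.0, 0.0), (0.0, 0.0), y_k, noise_var=0.5)
print(new_global, new_site)
```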

The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo

8 code implementations • 18 Nov 2011 • Matthew D. Hoffman, Andrew Gelman

Hamiltonian Monte Carlo (HMC) is a Markov chain Monte Carlo (MCMC) algorithm that avoids the random walk behavior and sensitivity to correlated parameters that plague many MCMC methods by taking a series of steps informed by first-order gradient information.
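
Those gradient-informed steps are leapfrog updates; a minimal sketch follows. NUTS's contribution, not shown here, is building a trajectory of such steps and stopping automatically once it starts to double back.

```python
import numpy as np

def leapfrog(theta, rho, grad_logp, eps):
    """One reversible, volume-preserving HMC step."""
    rho = rho + 0.5 * eps * grad_logp(theta)   # half step on momentum
    theta = theta + eps * rho                  # full step on position
    rho = rho + 0.5 * eps * grad_logp(theta)   # half step on momentum
    return theta, rho

grad_logp = lambda th: -th                     # standard normal target
theta, rho = np.array([1.0]), np.array([0.5])
for _ in range(10):
    theta, rho = leapfrog(theta, rho, grad_logp, eps=0.1)
print(theta, rho)
```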
