1 code implementation • 29 Mar 2022 • Manushi Welandawe, Michael Riis Andersen, Aki Vehtari, Jonathan H. Huggins
RAABBVI adaptively decreases the learning rate by detecting convergence of the fixed-learning-rate iterates, then estimates the symmetrized Kullback-Leibler (KL) divergence between the current variational approximation and the optimal one.
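The symmetrized KL divergence used here has a closed form when both distributions are Gaussian; a minimal univariate sketch (function names are illustrative, not from the RAABBVI implementation):

```python
import math

def kl_gauss(mu1, sigma1, mu2, sigma2):
    """Closed-form KL(N(mu1, sigma1^2) || N(mu2, sigma2^2))."""
    return (math.log(sigma2 / sigma1)
            + (sigma1**2 + (mu1 - mu2)**2) / (2 * sigma2**2)
            - 0.5)

def symmetrized_kl(mu1, sigma1, mu2, sigma2):
    """Symmetrized KL divergence: KL(p||q) + KL(q||p)."""
    return (kl_gauss(mu1, sigma1, mu2, sigma2)
            + kl_gauss(mu2, sigma2, mu1, sigma1))
```

Unlike the one-sided KL, this quantity is symmetric in its two arguments, which is what makes it usable as a distance-like convergence measure.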
5 code implementations • 9 Aug 2021 • Lu Zhang, Bob Carpenter, Andrew Gelman, Aki Vehtari
Pathfinder returns draws from the approximation with the lowest estimated Kullback-Leibler (KL) divergence to the true posterior.
no code implementations • ICML Workshop INNF 2021 • Akash Kumar Dhaka, Alejandro Catalina, Manushi Welandawe, Michael Riis Andersen, Jonathan H. Huggins, Aki Vehtari
Current black-box variational inference (BBVI) methods require the user to make numerous design choices, such as the selection of variational objective and approximating family, yet there is little principled guidance on how to do so.
no code implementations • NeurIPS 2021 • Akash Kumar Dhaka, Alejandro Catalina, Manushi Welandawe, Michael Riis Andersen, Jonathan H. Huggins, Aki Vehtari
Our framework and supporting experiments help to distinguish between the behavior of BBVI methods for approximating low-dimensional versus moderate-to-high-dimensional posteriors.
1 code implementation • 22 Jan 2021 • Yuling Yao, Gregor Pirš, Aki Vehtari, Andrew Gelman
We show that stacking is most effective when model predictive performance is heterogeneous in inputs, and we can further improve the stacked mixture with a hierarchical model.
1 code implementation • 31 Dec 2020 • Eero Siivola, Javier Gonzalez, Andrei Paleyes, Aki Vehtari
The increasing availability of structured but high dimensional data has opened new opportunities for optimization.
1 code implementation • 30 Nov 2020 • Andrew Gelman, Aki Vehtari
We review the most important statistical ideas of the past half century, which we categorize as: counterfactual causal inference, bootstrapping and simulation-based inference, overparameterized models and regularization, Bayesian multilevel models, generic computation algorithms, adaptive decision analysis, robust inference, and exploratory data analysis.
Causal Inference Methodology
1 code implementation • 14 Oct 2020 • Alejandro Catalina, Paul-Christian Bürkner, Aki Vehtari
Projection predictive inference is a decision theoretic Bayesian approach that decouples model estimation from decision making.
Methodology Computation
1 code implementation • 1 Sep 2020 • Yuling Yao, Collin Cademartori, Aki Vehtari, Andrew Gelman
The normalizing constant plays an important role in Bayesian computation, and there is a large literature on methods for computing or approximating normalizing constants that cannot be evaluated in closed form.
Computation Methodology
no code implementations • NeurIPS 2020 • Akash Kumar Dhaka, Alejandro Catalina, Michael Riis Andersen, Måns Magnusson, Jonathan H. Huggins, Aki Vehtari
We consider the problem of fitting variational posterior approximations using stochastic optimization methods.
1 code implementation • 25 Aug 2020 • Tuomas Sivula, Måns Magnusson, Aki Vehtari
We show that it is possible to construct an unbiased estimator considering a specific predictive performance measure and model.
Methodology
1 code implementation • 24 Aug 2020 • Tuomas Sivula, Måns Magnusson, Aki Vehtari
We show that in certain situations, the problematic skewness of the error distribution, which occurs when the models make similar predictions, does not fade away even as the data size grows to infinity.
Methodology
1 code implementation • 22 Jun 2020 • Yuling Yao, Aki Vehtari, Andrew Gelman
When working with multimodal Bayesian posterior distributions, Markov chain Monte Carlo (MCMC) algorithms have difficulty moving between modes, and default variational or mode-based approximate inferences will understate posterior uncertainty.
1 code implementation • 6 May 2020 • Topi Paananen, Alejandro Catalina, Paul-Christian Bürkner, Aki Vehtari
Many data sets contain an inherent multilevel structure, for example, because of repeated measurements of the same observational units.
no code implementations • 25 Mar 2020 • Eero Siivola, Akash Kumar Dhaka, Michael Riis Andersen, Javier Gonzalez, Pablo Garcia Moreno, Aki Vehtari
This direction has been mainly driven by the use of BO in machine learning hyper-parameter configuration problems.
1 code implementation • 7 Dec 2019 • Juho Timonen, Henrik Mannerström, Aki Vehtari, Harri Lähdesmäki
The lgpr tool is implemented as a comprehensive and user-friendly R-package.
1 code implementation • 21 Oct 2019 • Homayun Afrabandpey, Tomi Peltola, Juho Piironen, Aki Vehtari, Samuel Kaski
Through experiments on real-world data sets, using decision trees as interpretable models and Bayesian additive regression models as reference models, we show that for the same level of interpretability, our approach generates more accurate models than the alternative of restricting the prior.
1 code implementation • 17 Oct 2019 • Topi Paananen, Michael Riis Andersen, Aki Vehtari
For nonlinear supervised learning models, assessing the importance of predictor variables or their interactions is not straightforward because it can vary in the domain of the variables.
no code implementations • Approximate Inference (AABI) Symposium 2019 • Marko Järvenpää, Aki Vehtari, Pekka Marttinen
Surrogate models can be used to accelerate approximate Bayesian computation (ABC).
no code implementations • 14 Oct 2019 • Marko Järvenpää, Aki Vehtari, Pekka Marttinen
We propose a numerical method to fully quantify the uncertainty in, for example, ABC posterior moments.
2 code implementations • 20 Jun 2019 • Topi Paananen, Juho Piironen, Paul-Christian Bürkner, Aki Vehtari
Adaptive importance sampling is a class of techniques for finding good proposal distributions for importance sampling.
1 code implementation • 28 May 2019 • Ben Bales, Arya Pourzanjani, Aki Vehtari, Linda Petzold
We present a selection criterion for the Euclidean metric adapted during warmup in a Hamiltonian Monte Carlo sampler, making it possible to automatically pick the metric based on the model and the availability of warmup draws.
Computation Methodology
1 code implementation • 3 May 2019 • Marko Järvenpää, Michael Gutmann, Aki Vehtari, Pekka Marttinen
We consider Bayesian inference when only a limited number of noisy log-likelihood evaluations can be obtained.
no code implementations • 24 Apr 2019 • Måns Magnusson, Michael Riis Andersen, Johan Jonasson, Aki Vehtari
Model inference, such as model comparison, model checking, and model selection, is an important part of model development.
1 code implementation • 10 Apr 2019 • Iiris Sundin, Peter Schulam, Eero Siivola, Aki Vehtari, Suchi Saria, Samuel Kaski
Machine learning can help personalized decision support by learning models to predict individual treatment effects (ITE).
2 code implementations • 19 Mar 2019 • Aki Vehtari, Andrew Gelman, Daniel Simpson, Bob Carpenter, Paul-Christian Bürkner
In this paper we show that the convergence diagnostic $\widehat{R}$ of Gelman and Rubin (1992) has serious flaws.
Computation Methodology
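The classic potential scale reduction factor that this paper revisits can be computed in a few lines. A sketch of the basic (non-rank-normalized) $\widehat{R}$ of Gelman and Rubin, not the improved diagnostic the paper proposes:

```python
import numpy as np

def basic_rhat(chains):
    """Basic (non-rank-normalized) R-hat for an (m, n) array of
    m chains with n draws each, following Gelman and Rubin (1992)."""
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # mean within-chain variance
    B = n * chain_means.var(ddof=1)         # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return float(np.sqrt(var_hat / W))
```

When chains with very different means are mixed, $B$ dominates and $\widehat{R}$ rises well above 1; the paper's point is that this basic version misses several important failure modes, motivating rank normalization and chain splitting.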
1 code implementation • 17 Feb 2019 • Paul-Christian Bürkner, Jonah Gabry, Aki Vehtari
One of the common goals of time series analysis is to use the observed series to inform predictions for future observations.
Methodology
1 code implementation • 4 Oct 2018 • Juho Piironen, Markus Paasiniemi, Aki Vehtari
This paper discusses predictive inference and feature selection for generalized linear models with scarce but high-dimensional data.
6 code implementations • 18 Apr 2018 • Sean Talts, Michael Betancourt, Daniel Simpson, Aki Vehtari, Andrew Gelman
Verifying the correctness of Bayesian computation is challenging.
Methodology
1 code implementation • ICML 2018 • Yuling Yao, Aki Vehtari, Daniel Simpson, Andrew Gelman
While it's always possible to compute a variational approximation to a posterior distribution, it can be difficult to discover problems with this approximation.
1 code implementation • 17 Jan 2018 • Donald R. Williams, Juho Piironen, Aki Vehtari, Philippe Rast
Here we introduce a Bayesian method for estimating sparse matrices, in which conditional relationships are determined with projection predictive selection.
Applications Methodology
2 code implementations • 21 Dec 2017 • Topi Paananen, Juho Piironen, Michael Riis Andersen, Aki Vehtari
Variable selection for Gaussian process models is often done using automatic relevance determination, which uses the inverse length-scale parameter of each input variable as a proxy for variable relevance.
no code implementations • 17 Oct 2017 • Juho Piironen, Aki Vehtari
In high-dimensional prediction problems, where the number of features may greatly exceed the number of training instances, a fully Bayesian approach with a sparsifying prior is known to produce good results but is computationally challenging.
Methodology
1 code implementation • 13 Oct 2017 • Pedram Daee, Tomi Peltola, Aki Vehtari, Samuel Kaski
In human-in-the-loop machine learning, the user provides information beyond that in the training data.
2 code implementations • 5 Sep 2017 • Jonah Gabry, Daniel Simpson, Aki Vehtari, Michael Betancourt, Andrew Gelman
Bayesian data analysis is about more than just computing a posterior distribution, and Bayesian visualization is about more than trace plots of Markov chains.
Methodology Applications
2 code implementations • 2 Aug 2017 • Jarno Lintusaari, Henri Vuollekoski, Antti Kangasrääsiö, Kusti Skytén, Marko Järvenpää, Pekka Marttinen, Michael U. Gutmann, Aki Vehtari, Jukka Corander, Samuel Kaski
The stand-alone ELFI graph can be used with any of the available inference methods without modifications.
no code implementations • 14 Jun 2017 • Olli-Pekka Koistinen, Freyja B. Dagbjartsdóttir, Vilhjálmur Ásgeirsson, Aki Vehtari, Hannes Jónsson
A Gaussian process model also provides an uncertainty estimate for the approximate energy surface, and this can be used to focus the calculations on the lesser-known part of the path, thereby halving the number of energy and force evaluations needed in the present calculations.
2 code implementations • 6 Apr 2017 • Yuling Yao, Aki Vehtari, Daniel Simpson, Andrew Gelman
The widely recommended procedure of Bayesian model averaging is flawed in the M-open setting in which the true data-generating process is not one of the candidate models being fit.
Methodology Computation
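Stacking, which this paper proposes in place of Bayesian model averaging, chooses model weights to maximize the log score of the mixture of predictive distributions. A toy two-model grid-search sketch (the paper itself optimizes over the full simplex using leave-one-out predictive densities; the function name is illustrative):

```python
import numpy as np

def stack_two_models(p1, p2, grid_size=1001):
    """Pick the weight w in [0, 1] maximizing the log score of the
    mixture w*p1 + (1-w)*p2, where p1 and p2 are arrays of held-out
    predictive densities, one entry per observation."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    ws = np.linspace(0.0, 1.0, grid_size)
    scores = [np.sum(np.log(w * p1 + (1 - w) * p2)) for w in ws]
    return float(ws[int(np.argmax(scores))])
```

If one model dominates on every observation the weight collapses onto it, recovering model selection; when the models are good in different regions, an interior weight wins, which is where stacking beats BMA in the M-open setting.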
1 code implementation • 4 Apr 2017 • Eero Siivola, Aki Vehtari, Jarno Vanhatalo, Javier González, Michael Riis Andersen
Bayesian optimization (BO) is a global optimization strategy designed to find the minimum of an expensive black-box function, typically defined on a compact subset of $\mathbb{R}^d$, by using a Gaussian process (GP) as a surrogate model for the objective.
no code implementations • 3 Apr 2017 • Marko Järvenpää, Michael U. Gutmann, Arijus Pleska, Aki Vehtari, Pekka Marttinen
We propose to compute the uncertainty in the ABC posterior density, which is due to a lack of simulations to estimate this quantity accurately, and define a loss function that measures this uncertainty.
no code implementations • 30 Mar 2017 • Olli-Pekka Koistinen, Emile Maras, Aki Vehtari, Hannes Jónsson
The calculation of minimum energy paths for transitions such as atomic and/or spin re-arrangements is an important task in many contexts and can often be used to determine the mechanism and rate of transitions.
no code implementations • 20 Oct 2016 • Marko Järvenpää, Michael Gutmann, Aki Vehtari, Pekka Marttinen
Approximate Bayesian computation (ABC) can be used for model fitting when the likelihood function is intractable but simulating from the model is feasible.
1 code implementation • 18 Apr 2016 • Alan D. Saul, James Hensman, Aki Vehtari, Neil D. Lawrence
Gaussian process models are flexible, Bayesian non-parametric approaches to regression.
no code implementations • 15 Sep 2015 • Michael Riis Andersen, Aki Vehtari, Ole Winther, Lars Kai Hansen
In this work, we address the problem of solving a series of underdetermined linear inverse problems subject to a sparsity constraint.
7 code implementations • 16 Jul 2015 • Aki Vehtari, Andrew Gelman, Jonah Gabry
Leave-one-out cross-validation (LOO) and the widely applicable information criterion (WAIC) are methods for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model using the log-likelihood evaluated at the posterior simulations of the parameter values.
Computation Methodology
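The pointwise quantities in this paper are all computed from a log-likelihood matrix evaluated at posterior draws. A minimal WAIC sketch under that setup (the paper's recommended PSIS-LOO additionally requires the Pareto-smoothing step, which is omitted here):

```python
import numpy as np

def waic(log_lik):
    """elpd_waic from an (S, N) matrix of log-likelihood values:
    S posterior draws, N observations."""
    log_lik = np.asarray(log_lik, dtype=float)
    S = log_lik.shape[0]
    # log pointwise predictive density: log of the mean over draws,
    # computed stably via logaddexp
    lppd = np.sum(np.logaddexp.reduce(log_lik, axis=0) - np.log(S))
    # effective number of parameters: sum of posterior variances
    p_waic = np.sum(log_lik.var(axis=0, ddof=1))
    return lppd - p_waic
```

With no posterior uncertainty in the log-likelihood (all draws identical), the penalty term vanishes and elpd_waic reduces to the plain log predictive density.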
9 code implementations • 9 Jul 2015 • Aki Vehtari, Daniel Simpson, Andrew Gelman, Yuling Yao, Jonah Gabry
Importance weighting is a general way to adjust Monte Carlo integration to account for draws from the wrong distribution, but the resulting estimate can be highly variable when the importance ratios have a heavy right tail.
no code implementations • 30 Mar 2015 • Juho Piironen, Aki Vehtari
From a predictive viewpoint, best results are obtained by accounting for model uncertainty by forming the full encompassing model, such as the Bayesian model averaging solution over the candidate models.
no code implementations • 23 Dec 2014 • Aki Vehtari, Tommi Mononen, Ville Tolvanen, Tuomas Sivula, Ole Winther
The future predictive performance of a Bayesian model can be estimated using Bayesian cross-validation.
2 code implementations • 16 Dec 2014 • Aki Vehtari, Andrew Gelman, Tuomas Sivula, Pasi Jylänki, Dustin Tran, Swupnil Sahai, Paul Blomstedt, John P. Cunningham, David Schiminovich, Christian Robert
A common divide-and-conquer approach for Bayesian computation with big data is to partition the data, perform local inference for each piece separately, and combine the results to obtain a global posterior approximation.
no code implementations • 22 Apr 2014 • Ville Tolvanen, Pasi Jylänki, Aki Vehtari
This paper presents a novel approach for approximate integration over the uncertainty of noise and signal variances in Gaussian process (GP) regression.
no code implementations • 27 Mar 2013 • Pasi Jylänki, Aapo Nummenmaa, Aki Vehtari
Comparisons are made to two alternative models with ARD priors: a Gaussian process with a NN covariance function and marginal maximum a posteriori estimates of the relevance parameters, and a NN with Markov chain Monte Carlo integration over all the unknown model parameters.
1 code implementation • 1 Nov 2012 • Jaakko Riihimäki, Aki Vehtari
Logistic Gaussian process (LGP) priors provide a flexible alternative for modelling unknown densities.
1 code implementation • 25 Jun 2012 • Jarno Vanhatalo, Jaakko Riihimäki, Jouni Hartikainen, Pasi Jylänki, Ville Tolvanen, Aki Vehtari
The prior over functions is defined implicitly by the mean and covariance function, which determine the smoothness and variability of the function.
no code implementations • NeurIPS 2009 • Jarno Vanhatalo, Pasi Jylänki, Aki Vehtari
In this work, we discuss the properties of a Gaussian process regression model with the Student-t likelihood and utilize the Laplace approximation for approximate inference.