no code implementations • 6 Sep 2024 • Marvin Schmitt, Chengkun Li, Aki Vehtari, Luigi Acerbi, Paul-Christian Bürkner, Stefan T. Radev
Bayesian inference often faces a trade-off between computational speed and sampling accuracy.
no code implementations • 24 Jun 2024 • Trung Trinh, Markus Heinonen, Luigi Acerbi, Samuel Kaski
Deep neural networks (DNNs) excel on clean images but struggle with corrupted ones.
1 code implementation • 27 Jun 2023 • Gurjeet Sangra Singh, Luigi Acerbi
PyBADS is a Python implementation of the Bayesian Adaptive Direct Search (BADS) algorithm for fast and robust black-box optimization (Acerbi and Ma, 2017).
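A minimal usage sketch of the kind of black-box optimization call PyBADS supports; the toy objective below is illustrative, and exact argument names and result fields should be checked against the PyBADS documentation.

```python
import numpy as np
from pybads import BADS

def target(x):
    """Toy deterministic objective; BADS treats it as a black box."""
    x = np.atleast_1d(x)
    return float(np.sum((x - 0.5) ** 2))

x0 = np.zeros(2)                               # starting point
lb, ub = np.full(2, -5.0), np.full(2, 5.0)     # hard bounds
plb, pub = np.full(2, -2.0), np.full(2, 2.0)   # plausible bounds

bads = BADS(target, x0, lb, ub, plb, pub)
result = bads.optimize()
print(result["x"], result["fval"])             # best point found and its value
```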
1 code implementation • 5 Jun 2023 • Trung Trinh, Markus Heinonen, Luigi Acerbi, Samuel Kaski
To sidestep these difficulties, we propose First-order Repulsive Deep Ensemble (FoRDE), an ensemble learning method based on particle-based variational inference (ParVI), which performs repulsion in the space of first-order input gradients.
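As a rough illustration of what "repulsion in the space of first-order input gradients" can look like, here is a PyTorch sketch of an RBF-kernel repulsion penalty computed on per-member input gradients; adding the penalty to the training loss pushes ensemble members toward dissimilar input gradients. This is an illustrative approximation, not the paper's exact ParVI update (the kernel, normalization, and how the repulsion enters the update differ in FoRDE).

```python
import torch

def input_gradients(models, x, y, loss_fn):
    """First-order input gradients of the loss, one per ensemble member."""
    grads = []
    for model in models:
        xg = x.clone().requires_grad_(True)
        loss = loss_fn(model(xg), y)
        (g,) = torch.autograd.grad(loss, xg)
        grads.append(g.flatten(1).mean(dim=0))   # average over the batch -> (input_dim,)
    return torch.stack(grads)                    # (n_members, input_dim)

def repulsion_penalty(grads, bandwidth=1.0):
    """RBF-kernel repulsion: large when members' input gradients point the same way."""
    g = grads / (grads.norm(dim=1, keepdim=True) + 1e-12)   # compare directions only
    k = torch.exp(-torch.cdist(g, g) ** 2 / (2 * bandwidth ** 2))
    return (k.sum() - k.diagonal().sum()) / (len(g) * (len(g) - 1))  # needs >= 2 members
```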
1 code implementation • NeurIPS 2023 • Daolang Huang, Ayush Bharti, Amauri Souza, Luigi Acerbi, Samuel Kaski
Simulation-based inference (SBI) methods such as approximate Bayesian computation (ABC), synthetic likelihood, and neural posterior estimation (NPE) rely on simulating statistics to infer parameters of intractable likelihood models.
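As a reminder of the role summary statistics play in these methods, here is a minimal rejection-ABC sketch for a toy Gaussian simulator; the statistics function is exactly the component whose robustness is at stake, and the simulator, prior, and tolerance below are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=100):
    """Toy simulator: i.i.d. Gaussian data with unknown mean theta."""
    return rng.normal(theta, 1.0, size=n)

def statistics(x):
    """Summary statistics; ABC compares these instead of the raw data."""
    return np.array([x.mean(), x.std()])

def rejection_abc(x_obs, n_draws=10_000, tol=0.1):
    s_obs, accepted = statistics(x_obs), []
    for _ in range(n_draws):
        theta = rng.normal(0.0, 5.0)                 # draw from the prior
        s_sim = statistics(simulator(theta))
        if np.linalg.norm(s_sim - s_obs) < tol:      # keep draws whose statistics match
            accepted.append(theta)
    return np.array(accepted)

posterior_samples = rejection_abc(simulator(1.5))
```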
1 code implementation • 16 Mar 2023 • Bobby Huggins, Chengkun Li, Marlon Tobaben, Mikko J. Aarnos, Luigi Acerbi
PyVBMC is a Python implementation of the Variational Bayesian Monte Carlo (VBMC) algorithm for posterior and model inference for black-box computational models (Acerbi, 2018, 2020).
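A minimal usage sketch of the PyVBMC interface; the target here is a toy unnormalized log posterior, and argument names and the exact form of the returned objects should be checked against the PyVBMC documentation.

```python
import numpy as np
from pyvbmc import VBMC

def log_joint(theta):
    """Toy unnormalized log posterior: Gaussian prior times Gaussian likelihood."""
    theta = np.atleast_1d(theta)
    return float(-0.5 * np.sum(theta ** 2) - 0.5 * np.sum((theta - 1.0) ** 2))

D = 2
x0 = np.zeros(D)                               # starting point
lb, ub = np.full(D, -10.0), np.full(D, 10.0)   # hard bounds
plb, pub = np.full(D, -3.0), np.full(D, 3.0)   # plausible bounds

vbmc = VBMC(log_joint, x0, lb, ub, plb, pub)
vp, results = vbmc.optimize()                  # variational posterior + diagnostics
samples, _ = vp.sample(1000)                   # draw from the approximate posterior
```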
1 code implementation • 9 Mar 2023 • Chengkun Li, Grégoire Clarté, Martin Jørgensen, Luigi Acerbi
We propose the framework of post-process Bayesian inference as a means to obtain a quick posterior approximation from existing target density evaluations, with no further model calls.
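The idea can be pictured with a simple surrogate: given parameter values and their already-computed log-density evaluations, fit a regression surrogate and sample from the surrogate instead of calling the model again. The sketch below uses a plain Gaussian-process regressor and a random-walk Metropolis sampler as stand-ins; the paper's actual method is considerably more refined.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def surrogate_posterior_sampler(thetas, log_densities, n_steps=5000, step=0.2, seed=0):
    """Fit a GP to existing (theta, log p) evaluations, then run Metropolis on the GP mean."""
    rng = np.random.default_rng(seed)
    gp = GaussianProcessRegressor(normalize_y=True).fit(thetas, log_densities)

    def log_p(theta):
        return gp.predict(theta.reshape(1, -1))[0]      # surrogate log density, no model calls

    current = thetas[np.argmax(log_densities)].copy()   # start at the best evaluated point
    current_lp, samples = log_p(current), []
    for _ in range(n_steps):
        proposal = current + step * rng.standard_normal(current.shape)
        proposal_lp = log_p(proposal)
        if np.log(rng.uniform()) < proposal_lp - current_lp:
            current, current_lp = proposal, proposal_lp
        samples.append(current.copy())
    return np.array(samples)
```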
1 code implementation • 3 Mar 2023 • Alexander Aushev, Aini Putkonen, Gregoire Clarte, Suyog Chandramouli, Luigi Acerbi, Samuel Kaski, Andrew Howes
In this paper, we propose BOSMOS: an approach to experimental design that can select between computational models without tractable likelihoods.
1 code implementation • 6 Jun 2022 • Trung Trinh, Markus Heinonen, Luigi Acerbi, Samuel Kaski
In this paper, we interpret these latent noise variables as implicit representations of simple and domain-agnostic data perturbations during training, producing BNNs that perform well under covariate shift due to input corruptions.
1 code implementation • 22 Feb 2022 • Daniel Augusto de Souza, Diego Mesquita, Samuel Kaski, Luigi Acerbi
While efficient, embarrassingly parallel MCMC is very sensitive to the quality of subposterior sampling.
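For context, a minimal combination step for embarrassingly parallel MCMC is sketched below: each data shard's subposterior is approximated as a Gaussian and the Gaussians are multiplied (precisions add, means are precision-weighted). The point of the paper is precisely that naive combinations like this fail when any subposterior is poorly sampled.

```python
import numpy as np

def combine_subposteriors(subposterior_samples):
    """Combine subposterior samples via a Gaussian product approximation.

    Each element of `subposterior_samples` is an (n_samples, dim) array of MCMC
    samples from one data shard's subposterior.
    """
    precisions, weighted_means = [], []
    for samples in subposterior_samples:
        cov = np.atleast_2d(np.cov(samples, rowvar=False))
        prec = np.linalg.inv(cov)
        precisions.append(prec)
        weighted_means.append(prec @ samples.mean(axis=0))
    combined_cov = np.linalg.inv(sum(precisions))
    combined_mean = combined_cov @ sum(weighted_means)
    return combined_mean, combined_cov
```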
1 code implementation • NeurIPS 2020 • Nisheet Patel, Luigi Acerbi, Alexandre Pouget
We derive from first principles an algorithm, the Dynamic Resource Allocator (DRA), and apply it to two standard reinforcement learning tasks and a model-based planning task; we find that DRA allocates more resources to items in memory that have a higher impact on cumulative rewards.
2 code implementations • NeurIPS 2020 • Luigi Acerbi
Variational Bayesian Monte Carlo (VBMC) is a recently introduced framework that uses Gaussian process surrogates to perform approximate Bayesian inference in models with black-box likelihoods that are expensive to evaluate.
2 code implementations • 12 Jan 2020 • Bas van Opheusden, Luigi Acerbi, Wei Ji Ma
We provide theoretical arguments in favor of IBS and an empirical assessment of the method for maximum-likelihood estimation with simulation-based models.
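The core of inverse binomial sampling (IBS) is easy to state: for each trial, keep simulating responses from the model at the candidate parameters until one matches the observed response; if the first match occurs at the K-th draw, an unbiased estimate of that trial's log-likelihood is -(1 + 1/2 + ... + 1/(K-1)). A minimal sketch, where the simulator interface is a placeholder:

```python
import numpy as np

def ibs_log_likelihood(simulate, theta, stimuli, responses, rng=None):
    """Inverse binomial sampling: unbiased estimate of the log-likelihood.

    `simulate(theta, stimulus, rng)` must return one simulated response for a
    single trial; `stimuli` and `responses` hold the observed data.
    """
    rng = rng or np.random.default_rng()
    total = 0.0
    for stimulus, response in zip(stimuli, responses):
        k = 1
        while simulate(theta, stimulus, rng) != response:
            k += 1
        # Unbiased estimate of log p(response | stimulus, theta): -sum_{j=1}^{K-1} 1/j
        total -= np.sum(1.0 / np.arange(1, k))
    return total
```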
4 code implementations • NeurIPS 2018 • Luigi Acerbi
We introduce here a novel sample-efficient inference framework, Variational Bayesian Monte Carlo (VBMC).
4 code implementations • NeurIPS 2017 • Luigi Acerbi, Wei Ji Ma
Computational models in fields such as computational neuroscience are often evaluated via stochastic simulation or numerical approximation.
no code implementations • NeurIPS 2014 • Luigi Acerbi, Wei Ji Ma, Sethu Vijayakumar
Bayesian observer models are very effective in describing human performance in perceptual tasks, so much so that they are trusted to faithfully recover hidden mental representations of priors, likelihoods, or loss functions from the data.