no code implementations • 23 Dec 2024 • Laura Manduchi, Antoine Wehenkel, Jens Behrmann, Luca Pegolotti, Andy C. Miller, Ozan Sener, Marco Cuturi, Guillermo Sapiro, Jörn-Henrik Jacobsen
Whole-body hemodynamics simulators, which model blood flow and pressure waveforms as functions of physiological parameters, are now essential tools for studying cardiovascular systems.
no code implementations • 14 May 2024 • Antoine Wehenkel, Juan L. Gamella, Ozan Sener, Jens Behrmann, Guillermo Sapiro, Marco Cuturi, Jörn-Henrik Jacobsen
Driven by steady progress in generative modeling, simulation-based inference (SBI) has enabled inference over stochastic simulators.
1 code implementation • NeurIPS 2023 • Maciej Falkiewicz, Naoya Takeishi, Imahn Shekhzadeh, Antoine Wehenkel, Arnaud Delaunoy, Gilles Louppe, Alexandros Kalousis
Bayesian inference expresses the uncertainty of a posterior belief under a probabilistic model, given prior information and the likelihood of the evidence.
no code implementations • 26 Jul 2023 • Antoine Wehenkel, Laura Manduchi, Jens Behrmann, Luca Pegolotti, Andrew C. Miller, Guillermo Sapiro, Ozan Sener, Marco Cuturi, Jörn-Henrik Jacobsen
Over the past decades, hemodynamics simulators have steadily evolved and have become tools of choice for studying cardiovascular systems in silico.
1 code implementation • 29 Aug 2022 • Arnaud Delaunoy, Joeri Hermans, François Rozet, Antoine Wehenkel, Gilles Louppe
In this work, we introduce Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm designed to produce posterior approximations that tend to be more conservative, hence improving their reliability, while sharing the same Bayes optimal solution.
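Below is a minimal sketch of the idea as we read it from this abstract: standard NRE trains a classifier to separate pairs drawn from the joint p(θ, x) from pairs drawn from the product of marginals, and BNRE adds a balancing penalty that nudges the classifier towards more conservative posteriors. The network, the penalty weight, and the toy simulator are placeholders, not the paper's reference implementation.

```python
import torch
import torch.nn as nn

# Hypothetical classifier d(theta, x) used by neural ratio estimation (NRE):
# trained to separate joint samples from samples of the product of marginals.
classifier = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

def bnre_loss(theta, x, lam=100.0):
    """Standard NRE binary cross-entropy plus a balancing penalty that
    pushes the posterior approximation towards conservativeness."""
    joint = torch.cat([theta, x], dim=-1)
    shuffled = torch.cat([theta[torch.randperm(len(theta))], x], dim=-1)
    d_joint = torch.sigmoid(classifier(joint))
    d_marg = torch.sigmoid(classifier(shuffled))
    bce = -(torch.log(d_joint) + torch.log(1 - d_marg)).mean()
    balance = (d_joint.mean() + d_marg.mean() - 1.0) ** 2  # balancing condition
    return bce + lam * balance

# Toy simulator: x = theta + noise.
theta = torch.randn(256, 1)
x = theta + 0.1 * torch.randn(256, 1)
loss = bnre_loss(theta, x)
loss.backward()
```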
1 code implementation • 8 Feb 2022 • Antoine Wehenkel, Jens Behrmann, Hsiang Hsu, Guillermo Sapiro, Gilles Louppe, Jörn-Henrik Jacobsen
Hybrid modelling reduces the misspecification of expert models by combining them with machine learning (ML) components learned from data.
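A minimal sketch of the hybrid-modelling idea, assuming a toy expert model (linear drag) with an interpretable parameter and a small neural residual learned from data; the architectures and training objectives studied in the paper differ.

```python
import torch
import torch.nn as nn

# Hypothetical hybrid model: a simple expert model plus a small neural network
# that learns the residual the expert model misses.
class HybridModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.drag = nn.Parameter(torch.tensor(0.5))       # interpretable expert parameter
        self.residual = nn.Sequential(                    # data-driven correction
            nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1)
        )

    def forward(self, v):
        expert = -self.drag * v                           # expert model prediction
        return expert + self.residual(v)                  # ML component fills the gap

model = HybridModel()
v = torch.linspace(-1, 1, 64).unsqueeze(-1)
print(model(v).shape)  # torch.Size([64, 1])
```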
4 code implementations • 13 Oct 2021 • Joeri Hermans, Arnaud Delaunoy, François Rozet, Antoine Wehenkel, Volodimir Begy, Gilles Louppe
We present extensive empirical evidence showing that current Bayesian simulation-based inference algorithms can produce computationally unfaithful posterior approximations.
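The sketch below illustrates the kind of faithfulness check this line of work motivates: estimating the empirical coverage of credible intervals produced by an approximate posterior. The simulator, the stand-in posterior, and the protocol details are illustrative assumptions, not the paper's exact diagnostic.

```python
import numpy as np

# For many (theta*, x) pairs drawn from the prior and the simulator, count how
# often theta* falls inside the approximate posterior's central credible
# interval.  A faithful posterior reaches at least the nominal level; an
# overconfident one falls short.
rng = np.random.default_rng(0)

def simulator(theta):
    return theta + rng.normal(0.0, 1.0)

def approximate_posterior_samples(x, n=1000):
    # Deliberately miscalibrated stand-in; a real SBI method would be plugged in here.
    return rng.normal(x, 0.8, size=n)

level, hits, trials = 0.9, 0, 500
for _ in range(trials):
    theta_star = rng.normal(0.0, 1.0)                     # draw from the prior
    x = simulator(theta_star)
    post = approximate_posterior_samples(x)
    lo, hi = np.quantile(post, [(1 - level) / 2, (1 + level) / 2])
    hits += lo <= theta_star <= hi

print(f"empirical coverage at {level:.0%} credibility: {hits / trials:.2f}")
```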
no code implementations • ICML Workshop INNF 2021 • Antoine Wehenkel, Gilles Louppe
Among likelihood-based approaches for deep generative modelling, variational autoencoders (VAEs) offer scalable amortized posterior inference and fast sampling.
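For context, a minimal VAE sketch showing the two properties mentioned here, amortized posterior inference via an encoder and fast sampling via a decoder; the architecture and objective below are generic placeholders rather than the models studied in the paper.

```python
import torch
import torch.nn as nn

# Minimal VAE: encoder amortizes q(z|x), the reparameterization trick keeps
# sampling differentiable, and the decoder enables fast ancestral sampling.
class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 2 * z_dim))
        self.dec = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

    def forward(self, x):
        mu, log_var = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)   # reparameterization
        recon = self.dec(z)
        kl = 0.5 * (mu**2 + log_var.exp() - 1 - log_var).sum(-1).mean()
        mse = ((recon - x) ** 2).sum(-1).mean()
        return mse + kl   # reconstruction + KL, a simple negative-ELBO-style objective

vae = VAE()
loss = vae(torch.rand(32, 784))
loss.backward()
```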
1 code implementation • 6 Jun 2021 • Thibaut Théate, Antoine Wehenkel, Adrien Bolland, Gilles Louppe, Damien Ernst
The results highlight the main strengths and weaknesses associated with each probability metric, together with an important limitation of the Wasserstein distance.
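As a toy illustration of comparing probability metrics on return distributions (the agents, environments, and exact metrics studied in the paper differ), the snippet below contrasts the 1-D Wasserstein-1 distance with the Kolmogorov-Smirnov statistic on two sets of sampled returns.

```python
import numpy as np

# For 1-D samples of equal size, the Wasserstein-1 distance reduces to the mean
# absolute difference of the sorted samples; the Kolmogorov-Smirnov statistic is
# the largest gap between empirical CDFs (approximated on a grid here).
rng = np.random.default_rng(0)
returns_a = rng.normal(1.0, 1.0, size=10_000)    # returns of policy A
returns_b = rng.normal(1.2, 1.5, size=10_000)    # returns of policy B

w1 = np.abs(np.sort(returns_a) - np.sort(returns_b)).mean()

grid = np.linspace(-5, 8, 1000)
cdf_a = (returns_a[None, :] <= grid[:, None]).mean(axis=1)
cdf_b = (returns_b[None, :] <= grid[:, None]).mean(axis=1)
ks = np.abs(cdf_a - cdf_b).max()

print(f"Wasserstein-1: {w1:.3f}, Kolmogorov-Smirnov: {ks:.3f}")
```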
2 code implementations • 28 May 2021 • Jonathan Dumas, Colin Cointe, Antoine Wehenkel, Antonio Sutera, Xavier Fettweis, Bertrand Cornélusse
This paper addresses the energy management of a grid-connected renewable generation plant coupled with a battery energy storage device in the capacity firming market, a mechanism designed to promote renewable power generation facilities in small, non-interconnected grids.
1 code implementation • 11 Nov 2020 • Maxime Vandegar, Michael Kagan, Antoine Wehenkel, Gilles Louppe
We revisit empirical Bayes in the absence of a tractable likelihood function, as is typical in scientific domains relying on computer simulations.
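A crude sketch of likelihood-free empirical Bayes under strong simplifying assumptions: a prior hyperparameter is tuned so that data pushed through a black-box simulator matches the observed data, here by simple moment matching rather than the neural estimators used in the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
# Pretend these are real measurements generated with prior scale 2.0 and unit noise.
observed = rng.normal(0.0, np.sqrt(2.0**2 + 1.0), size=5_000)

def simulate(prior_scale, n=5_000, seed=1):
    local = np.random.default_rng(seed)              # fixed seed keeps the objective deterministic
    theta = local.normal(0.0, prior_scale, size=n)   # draw parameters from the candidate prior
    return theta + local.normal(0.0, 1.0, size=n)    # black-box simulator adds unit noise

def discrepancy(prior_scale):
    return (simulate(prior_scale).std() - observed.std()) ** 2   # match the second moment

best = minimize_scalar(discrepancy, bounds=(0.1, 5.0), method="bounded")
print(f"estimated prior scale: {best.x:.2f}")        # should land near the true value 2.0
```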
no code implementations • 24 Oct 2020 • Arnaud Delaunoy, Antoine Wehenkel, Tanja Hinderer, Samaya Nissanke, Christoph Weniger, Andrew R. Williamson, Gilles Louppe
Gravitational waves from compact binaries measured by the LIGO and Virgo detectors are routinely analyzed using Markov Chain Monte Carlo sampling algorithms.
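For reference, a generic random-walk Metropolis-Hastings sketch of the kind of MCMC sampling mentioned here; actual gravitational-wave pipelines use far more sophisticated samplers and waveform likelihoods, so this is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta):
    return -0.5 * (theta - 2.0) ** 2                 # toy Gaussian target

samples, theta = [], 0.0
for _ in range(10_000):
    proposal = theta + rng.normal(0.0, 0.5)          # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(theta):
        theta = proposal                             # accept, otherwise keep the current state
    samples.append(theta)

print(f"posterior mean ~ {np.mean(samples[1000:]):.2f}")   # discard burn-in
```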
4 code implementations • 3 Jun 2020 • Antoine Wehenkel, Gilles Louppe
From this new perspective, we propose the graphical normalizing flow, a new invertible transformation with either a prescribed or a learnable graphical structure.
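A minimal sketch of the prescribed-structure case: each variable is transformed by an affine conditioner that only sees its parents in a given DAG adjacency matrix. The layer sizes, the affine transformation, and the example graph are assumptions for illustration; the paper's conditioners and its structure-learning procedure are richer.

```python
import torch
import torch.nn as nn

class GraphicalAffineFlow(nn.Module):
    """Affine flow whose conditioner is masked by a DAG adjacency matrix."""
    def __init__(self, adjacency):
        super().__init__()
        self.register_buffer("A", adjacency)          # A[i, j] = 1 if j is a parent of i
        d = adjacency.shape[0]
        self.cond = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 2)) for _ in range(d)]
        )

    def forward(self, x):
        z_cols, log_det = [], 0.0
        for i, net in enumerate(self.cond):
            parents = x * self.A[i]                   # mask out non-parents
            shift, log_scale = net(parents).unbind(-1)
            z_cols.append((x[:, i] - shift) * torch.exp(-log_scale))
            log_det = log_det - log_scale             # contribution to log |det J|
        return torch.stack(z_cols, dim=-1), log_det

# Chain graph x0 -> x1 -> x2.
A = torch.tensor([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
flow = GraphicalAffineFlow(A)
z, log_det = flow(torch.randn(8, 3))
```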
no code implementations • 1 Jun 2020 • Antoine Wehenkel, Gilles Louppe
Normalizing flows have emerged as an important family of deep neural networks for modelling complex probability distributions.
2 code implementations • NeurIPS 2019 • Antoine Wehenkel, Gilles Louppe
Monotonic neural networks have recently been proposed as a way to define invertible transformations.
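A sketch of the underlying idea, under the assumption that the monotone map is built as the integral of a strictly positive network output: f(x) = b + integral_0^x g(t) dt is strictly increasing, hence invertible, whenever g > 0. The trapezoidal quadrature and the network below are simplifications, not the paper's integration scheme.

```python
import torch
import torch.nn as nn

class MonotonicTransform(nn.Module):
    """Scalar map f(x) = b + int_0^x g(t) dt with g > 0 enforced by a Softplus."""
    def __init__(self, steps=64):
        super().__init__()
        self.g = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())
        self.bias = nn.Parameter(torch.zeros(1))
        self.steps = steps

    def forward(self, x):                              # x: (batch, 1)
        t = torch.linspace(0.0, 1.0, self.steps, device=x.device)
        pts = x * t                                    # integration nodes scaled to [0, x]
        g_vals = self.g(pts.reshape(-1, 1)).reshape(x.shape[0], self.steps)
        integral = torch.trapz(g_vals, pts, dim=-1)    # crude trapezoidal quadrature
        return self.bias + integral.unsqueeze(-1)

f = MonotonicTransform()
x = torch.linspace(-3, 3, 7).unsqueeze(-1)
y = f(x)
assert torch.all(y[1:] > y[:-1])                       # outputs are strictly increasing
```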
1 code implementation • 21 Dec 2018 • Nicolas Vecoven, Damien Ernst, Antoine Wehenkel, Guillaume Drion
Animals excel at adapting their intentions, attention, and actions to the environment, making them remarkably efficient at interacting with a rich, unpredictable and ever-changing external world, a property that intelligent machines currently lack.
1 code implementation • 30 Nov 2018 • Arthur Pesah, Antoine Wehenkel, Gilles Louppe
Likelihood-free inference is concerned with the estimation of the parameters of a non-differentiable stochastic simulator that best reproduce real observations.
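For orientation, a classic likelihood-free baseline, rejection ABC, which treats the simulator purely as a black box; the simulator, summary statistic, and tolerance below are toy assumptions, and the method proposed in the paper is different.

```python
import numpy as np

# Rejection ABC: draw parameters from the prior, push them through the
# (possibly non-differentiable) simulator, and keep only those whose simulated
# summary lies close to the observed one.
rng = np.random.default_rng(0)

def simulator(theta, n=50):
    return rng.normal(theta, 1.0, size=n)             # black box from ABC's point of view

observed = simulator(1.5)
obs_mean = observed.mean()                            # summary statistic

accepted = []
for _ in range(20_000):
    theta = rng.uniform(-5.0, 5.0)                    # draw from the prior
    if abs(simulator(theta).mean() - obs_mean) < 0.1: # tolerance epsilon
        accepted.append(theta)

posterior = np.array(accepted)
print(f"posterior mean ~ {posterior.mean():.2f} from {len(posterior)} accepted samples")
```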