1 code implementation • 29 Aug 2022 • Arnaud Delaunoy, Joeri Hermans, François Rozet, Antoine Wehenkel, Gilles Louppe
In this work, we introduce Balanced Neural Ratio Estimation (BNRE), a variation of the NRE algorithm designed to produce posterior approximations that tend to be more conservative, hence improving their reliability, while sharing the same Bayes optimal solution.
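Below is a minimal, hedged sketch of what such a balanced ratio-estimation loss could look like, assuming the usual NRE setup where a binary classifier separates joint pairs (theta, x) from marginal pairs, and assuming the balancing term simply penalizes the classifier outputs on joint and marginal pairs for not averaging to one. The network, the hyperparameter `lam`, and all names are illustrative, not the reference implementation.

```python
import torch
import torch.nn as nn

class RatioClassifier(nn.Module):
    """Hypothetical classifier d(theta, x) -> logit of (theta, x) being a joint sample."""
    def __init__(self, theta_dim, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(theta_dim + x_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, theta, x):
        return self.net(torch.cat([theta, x], dim=-1))

def balanced_nre_loss(classifier, theta, x, lam=100.0):
    """Standard NRE binary cross-entropy plus a balancing penalty (sketch, assumed form)."""
    # Joint pairs (theta, x) vs. marginal pairs (theta shuffled against x).
    theta_marginal = theta[torch.randperm(theta.shape[0])]
    logits_joint = classifier(theta, x)
    logits_marginal = classifier(theta_marginal, x)
    bce = nn.functional.binary_cross_entropy_with_logits
    loss = bce(logits_joint, torch.ones_like(logits_joint)) \
         + bce(logits_marginal, torch.zeros_like(logits_marginal))
    # Assumed balancing condition: outputs on joint and marginal pairs should
    # average to one, which pushes the estimator toward more conservative ratios.
    d_joint = torch.sigmoid(logits_joint)
    d_marginal = torch.sigmoid(logits_marginal)
    balance = (d_joint.mean() + d_marginal.mean() - 1.0) ** 2
    return loss + lam * balance
```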
4 code implementations • 13 Oct 2021 • Joeri Hermans, Arnaud Delaunoy, François Rozet, Antoine Wehenkel, Volodimir Begy, Gilles Louppe
We present extensive empirical evidence showing that current Bayesian simulation-based inference algorithms can produce computationally unfaithful posterior approximations.
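A common way to probe this kind of unfaithfulness, and the style of diagnostic used in this line of work, is to measure the empirical coverage of approximate credible regions over many simulated test cases. The sketch below assumes access to a trained approximate posterior through placeholder callables (`log_prob_fn`, `posterior_sampler`, `simulator`, `prior_sampler`); these names are illustrative only.

```python
import numpy as np

def expected_coverage(log_prob_fn, posterior_sampler, simulator, prior_sampler,
                      levels=(0.5, 0.7, 0.9, 0.95), n_trials=200, n_samples=1000):
    """Empirical coverage of approximate highest-density credible regions (sketch)."""
    credibilities = []
    for _ in range(n_trials):
        theta_star = prior_sampler()
        x = simulator(theta_star)
        samples = posterior_sampler(x, n_samples)
        # Credibility level of the smallest HPD region containing theta_star:
        # the fraction of posterior samples with higher approximate density.
        lp_star = log_prob_fn(theta_star, x)
        lp_samples = log_prob_fn(samples, x)
        credibilities.append(np.mean(lp_samples > lp_star))
    credibilities = np.asarray(credibilities)
    # A faithful (or conservative) approximation has empirical coverage >= nominal.
    return {level: float(np.mean(credibilities <= level)) for level in levels}
```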
1 code implementation • 30 Nov 2020 • Joeri Hermans, Nilanjan Banik, Christoph Weniger, Gianfranco Bertone, Gilles Louppe
A statistical analysis of the observed perturbations in the density of stellar streams can in principle set stringent constraints on the mass function of dark matter subhaloes, which in turn can be used to constrain the mass of the dark matter particle.
3 code implementations • 4 Sep 2019 • Johann Brehmer, Siddharth Mishra-Sharma, Joeri Hermans, Gilles Louppe, Kyle Cranmer
The subtle and unique imprint of dark matter substructure on extended arcs in strong lensing systems contains a wealth of information about the properties and distribution of dark matter on small scales and, consequently, about the underlying particle physics.
5 code implementations • ICML 2020 • Joeri Hermans, Volodimir Begy, Gilles Louppe
This work introduces a novel approach to address the intractability of the likelihood and the marginal model.
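In this line of work, a classifier trained to approximate the likelihood-to-evidence ratio can be plugged into standard MCMC in place of the intractable likelihood. The sketch below assumes such a trained ratio estimator is available through a placeholder `log_ratio_fn`; the proposal scale and all names are illustrative.

```python
import torch

def ratio_mcmc(log_ratio_fn, log_prior_fn, x_obs, theta0, n_steps=5000, step=0.1):
    """Metropolis-Hastings on log p(theta) + log r(x_obs | theta) (sketch).

    `log_ratio_fn(theta, x)` stands in for a classifier logit approximating the
    likelihood-to-evidence ratio; `log_prior_fn` is the prior log-density.
    """
    theta = theta0.clone()
    log_post = log_prior_fn(theta) + log_ratio_fn(theta, x_obs)
    chain = []
    for _ in range(n_steps):
        # Gaussian random-walk proposal.
        proposal = theta + step * torch.randn_like(theta)
        log_post_prop = log_prior_fn(proposal) + log_ratio_fn(proposal, x_obs)
        # Accept or reject based on the approximate posterior density ratio.
        if torch.log(torch.rand(())) < log_post_prop - log_post:
            theta, log_post = proposal, log_post_prop
        chain.append(theta.clone())
    return torch.stack(chain)
```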
2 code implementations • 22 May 2018 • Joeri Hermans, Gilles Louppe
Distributed asynchronous SGD has become widely used for deep learning in large-scale systems, but remains notorious for its instability when increasing the number of workers.
1 code implementation • 6 Oct 2017 • Joeri Hermans, Gerasimos Spanakis, Rico Möckel
This work addresses the instability in asynchronous data parallel optimization.
2 code implementations • 22 Jul 2017 • Gilles Louppe, Joeri Hermans, Kyle Cranmer
We adapt the training procedure of generative adversarial networks by replacing the differentiable generative network with a domain-specific simulator.
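Since a domain-specific simulator is typically non-differentiable, gradients cannot flow from the discriminator back into the "generator". One way to make the adversarial objective trainable anyway, sketched below under that assumption, is to place a proposal distribution over the simulator parameters and update it with a score-function (REINFORCE-style) estimator. The simulator call, the Gaussian proposal, and every name here are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    """Standard GAN-style discriminator over observed vs. simulated data."""
    def __init__(self, x_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x):
        return self.net(x)

def adversarial_step(discriminator, d_opt, q_mean, q_log_std, q_opt, simulator, x_real):
    """One training step; q_mean / q_log_std are learnable proposal parameters."""
    bce = nn.functional.binary_cross_entropy_with_logits
    std = q_log_std.exp()
    # Sample simulator parameters from the Gaussian proposal, then simulate.
    theta = (q_mean + std * torch.randn(x_real.shape[0], q_mean.shape[-1])).detach()
    x_sim = simulator(theta)  # non-differentiable, black-box forward pass

    # 1) Update the discriminator exactly as in a standard GAN.
    d_loss = bce(discriminator(x_real), torch.ones(x_real.shape[0], 1)) \
           + bce(discriminator(x_sim), torch.zeros(x_sim.shape[0], 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Update the proposal with a score-function estimator, since gradients
    #    cannot be taken through the simulator itself.
    with torch.no_grad():
        gen_loss = bce(discriminator(x_sim), torch.ones(x_sim.shape[0], 1),
                       reduction='none').squeeze(-1)
    log_q = (-0.5 * ((theta - q_mean) / std) ** 2 - q_log_std).sum(-1)
    q_loss = (gen_loss * log_q).mean()
    q_opt.zero_grad(); q_loss.backward(); q_opt.step()
```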