no code implementations • 10 Jun 2024 • Arnaud Descours, Tom Huix, Arnaud Guillin, Manon Michel, Éric Moulines, Boris Nectoux
In this paper, we rigorously derive Central Limit Theorems (CLTs) for Bayesian two-layer neural networks in the infinite-width limit, trained by variational inference on a regression task.
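The flavor of such a result can be sketched as follows (schematic notation, not the paper's exact statement): writing $\mu^N = \frac{1}{N}\sum_{i=1}^{N} \delta_{w_i}$ for the empirical measure of the $N$ neurons' variational parameters and $\bar\mu$ for its mean-field (law-of-large-numbers) limit, a CLT describes the Gaussian fluctuations around that limit,

$$\eta^N = \sqrt{N}\,\big(\mu^N - \bar\mu\big) \;\xrightarrow[N\to\infty]{d}\; \eta,$$

where $\eta$ is a Gaussian process acting on a suitable class of test functions.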
no code implementations • 6 Jun 2024 • Tom Huix, Anna Korba, Alain Durmus, Eric Moulines
In this view, VI over this specific family can be cast as the minimization of a mollified relative entropy, i.e., the KL divergence between the convolution (with respect to a Gaussian kernel) of an atomic measure supported on Diracs and the target distribution.
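Concretely, in illustrative notation (the symbols $\varepsilon$, $x_i$, and $\pi$ are not taken from the abstract), the objective has the form

$$\mathcal{F}(\mu_n) = \mathrm{KL}\big(\mu_n * \mathcal{N}(0,\varepsilon I)\,\big\|\,\pi\big), \qquad \mu_n = \frac{1}{n}\sum_{i=1}^{n} \delta_{x_i},$$

i.e., the particles $x_1,\dots,x_n$ are the mixture means, the Gaussian kernel is the mollifier, and $\pi$ is the target distribution.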
1 code implementation • 19 Jul 2023 • Pierre Clavier, Tom Huix, Alain Durmus
In this paper, we introduce and analyze a variant of the Thompson sampling (TS) algorithm for contextual bandits.
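As a point of reference, here is a minimal sketch of standard linear Thompson sampling for contextual bandits, not the specific variant introduced in the paper; the class name, the exploration scale `v`, and the Gaussian posterior model are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of standard linear Thompson sampling (LinTS), not the
# paper's variant. Assumes rewards r = x^T theta + noise with a Gaussian
# prior/posterior over the shared parameter theta.
class LinearTS:
    def __init__(self, dim, v=1.0, lam=1.0):
        self.v = v                      # posterior-inflation (exploration) scale
        self.B = lam * np.eye(dim)      # posterior precision matrix
        self.f = np.zeros(dim)          # running sum of reward-weighted contexts
        self.mu = np.zeros(dim)         # posterior mean

    def select(self, contexts):
        # Sample a parameter from the (inflated) Gaussian posterior ...
        cov = self.v ** 2 * np.linalg.inv(self.B)
        theta = np.random.multivariate_normal(self.mu, cov)
        # ... and act greedily with respect to that sample.
        return int(np.argmax(contexts @ theta))

    def update(self, x, reward):
        self.B += np.outer(x, x)
        self.f += reward * x
        self.mu = np.linalg.solve(self.B, self.f)
```

Each round, the learner samples from its posterior, plays the arm whose context maximizes the sampled reward model, then performs the conjugate Gaussian update; exploration comes entirely from the posterior sampling step.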
no code implementations • 10 Jul 2023 • Arnaud Descours, Tom Huix, Arnaud Guillin, Manon Michel, Éric Moulines, Boris Nectoux
We provide a rigorous analysis of training by variational inference (VI) of Bayesian neural networks in the two-layer and infinite-width case.
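For intuition, the following is a minimal sketch of the kind of training objective analyzed: mean-field VI for a two-layer network on regression, with a Gaussian variational family, the reparameterization trick, and a $1/N$ output scaling. The module name, the `tanh` activation, and the KL weighting are illustrative assumptions, not the paper's exact setup.

```python
import torch

# Sketch: mean-field VI for a two-layer network with N neurons.
class MeanFieldTwoLayer(torch.nn.Module):
    def __init__(self, dim, n_neurons):
        super().__init__()
        self.mu = torch.nn.Parameter(0.1 * torch.randn(n_neurons, dim))
        self.log_sigma = torch.nn.Parameter(torch.full((n_neurons, dim), -3.0))
        self.n = n_neurons

    def forward(self, x):
        # Reparameterization: w = mu + sigma * eps, with eps ~ N(0, I).
        eps = torch.randn_like(self.mu)
        w = self.mu + torch.exp(self.log_sigma) * eps
        # 1/N output scaling, as in mean-field limits.
        return torch.tanh(x @ w.t()).mean(dim=1)

def neg_elbo(model, x, y, kl_weight=1.0):
    # Gaussian likelihood term (squared error) ...
    nll = 0.5 * ((model(x) - y) ** 2).mean()
    # ... plus closed-form KL(q || p) against a standard normal prior.
    sigma2 = torch.exp(2 * model.log_sigma)
    kl = 0.5 * (sigma2 + model.mu ** 2 - 1 - 2 * model.log_sigma).sum()
    return nll + kl_weight * kl / model.n
```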
1 code implementation • 8 Jul 2022 • Tom Huix, Szymon Majewski, Alain Durmus, Eric Moulines, Anna Korba
This paper studies Variational Inference (VI) for training Bayesian Neural Networks (BNNs) in the overparameterized regime, i.e., when the number of neurons tends to infinity.
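The objective in question is the standard evidence lower bound (ELBO); schematically, for a variational posterior $q$ and prior $p$ over the weights $w$,

$$\mathrm{ELBO}(q) = \mathbb{E}_{w\sim q}\big[\log p(\mathcal{D}\mid w)\big] - \mathrm{KL}\big(q\,\|\,p\big),$$

and the overparameterized analysis concerns how this objective, in particular the interplay between the likelihood term and the KL regularizer, behaves as the number of neurons grows.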