1 code implementation • 8 Jul 2022 • Tom Huix, Szymon Majewski, Alain Durmus, Eric Moulines, Anna Korba
This paper studies Variational Inference (VI) for training Bayesian Neural Networks (BNNs) in the overparameterized regime, i.e., when the number of neurons tends to infinity.
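As a rough, assumption-laden illustration of the setting (not the paper's setup or code): the sketch below runs mean-field Gaussian VI for a one-hidden-layer network of width N, maximizing a single-sample reparameterized ELBO with JAX. The 1/sqrt(N) output scaling, Gaussian likelihood with unit noise, prior scale, and toy regression data are all illustrative choices; growing N probes the overparameterized regime.

```python
import jax
import jax.numpy as jnp

def net(w, x, N):
    # one-hidden-layer network of width N; w stacks input and output weights
    d = x.shape[-1]
    w1, w2 = w[: N * d].reshape(N, d), w[N * d:]
    return jnp.tanh(x @ w1.T) @ w2 / jnp.sqrt(N)  # 1/sqrt(N) scaling (assumption)

def elbo(params, key, x, y, N, prior_std=1.0):
    # single-sample reparameterized ELBO for a mean-field Gaussian posterior
    mu, log_std = params
    eps = jax.random.normal(key, mu.shape)
    w = mu + jnp.exp(log_std) * eps                     # reparameterization trick
    log_lik = -0.5 * jnp.sum((net(w, x, N) - y) ** 2)   # Gaussian likelihood, unit noise
    # KL between the mean-field Gaussian posterior and the N(0, prior_std^2 I) prior
    kl = 0.5 * jnp.sum((mu ** 2 + jnp.exp(2 * log_std)) / prior_std ** 2
                       - 1.0 - 2.0 * log_std + 2.0 * jnp.log(prior_std))
    return log_lik - kl

# toy 1-d regression data (illustrative)
key = jax.random.PRNGKey(0)
x = jnp.linspace(-2.0, 2.0, 64)[:, None]
y = jnp.sin(3.0 * x[:, 0]) + 0.1 * jax.random.normal(key, (64,))

N, d = 100, 1
params = (jnp.zeros(N * d + N), -3.0 * jnp.ones(N * d + N))
grad_elbo = jax.jit(jax.grad(elbo), static_argnums=4)
for _ in range(2000):
    key, sub = jax.random.split(key)
    g = grad_elbo(params, sub, x, y, N)
    params = (params[0] + 1e-3 * g[0], params[1] + 1e-3 * g[1])  # gradient ascent on the ELBO
```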
2 code implementations • 20 May 2021 • Anna Korba, Pierre-Cyril Aubin-Frankowski, Szymon Majewski, Pierre Ablin
We investigate the properties of the Wasserstein gradient flow of the Kernel Stein Discrepancy (KSD) to approximate a target probability distribution $\pi$ on $\mathbb{R}^d$, known up to a normalization constant.
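A minimal sketch of this kind of particle scheme, not the paper's implementation: gradient descent on particle positions for a V-statistic estimate of the squared KSD (a forward-Euler discretization of the flow), assuming a Gaussian RBF kernel and a standard Gaussian target whose score is available in closed form. The step size and kernel bandwidth are hand-picked for this toy example.

```python
import jax
import jax.numpy as jnp

def score(x):
    # score of the target pi, here a standard Gaussian (illustrative assumption)
    return -x

def rbf(x, y, h=1.0):
    return jnp.exp(-jnp.sum((x - y) ** 2) / (2.0 * h))

def stein_kernel(x, y, h=1.0):
    # Stein kernel: s(x).s(y) k + s(x).grad_y k + s(y).grad_x k + tr(grad_x grad_y k)
    k = rbf(x, y, h)
    gx = jax.grad(rbf, argnums=0)(x, y, h)
    gy = jax.grad(rbf, argnums=1)(x, y, h)
    hxy = jax.jacfwd(jax.grad(rbf, argnums=0), argnums=1)(x, y, h)
    return score(x) @ score(y) * k + score(x) @ gy + score(y) @ gx + jnp.trace(hxy)

def ksd2(X, h=1.0):
    # V-statistic estimate of the squared KSD of the empirical measure of X
    K = jax.vmap(lambda a: jax.vmap(lambda b: stein_kernel(a, b, h))(X))(X)
    return jnp.mean(K)

# forward-Euler discretization of the flow: gradient descent on particle positions
X = 3.0 + jax.random.normal(jax.random.PRNGKey(0), (50, 2))
grad_ksd2 = jax.jit(jax.grad(ksd2))
for _ in range(500):
    X = X - 0.1 * grad_ksd2(X)  # step size chosen by hand for this toy example
```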
2 code implementations • 5 Jan 2021 • Kilian Fatras, Younes Zine, Szymon Majewski, Rémi Flamary, Rémi Gribonval, Nicolas Courty
We argue that the minibatch strategy comes with appealing properties (unbiased estimators and gradients, and a concentration bound around the expectation), but also with a limitation: minibatch OT is not a distance.
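Both points are easy to see in a toy sketch (not the authors' code): with equal-size uniform minibatches and a squared-Euclidean cost, exact OT reduces to an assignment problem, and averaging it over random minibatch pairs gives an unbiased Monte Carlo estimate of the minibatch OT loss; evaluating it between a distribution and itself yields a strictly positive value, so the metric axioms fail.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def exact_ot(x, y):
    # exact OT cost between two equal-size uniform empirical measures with a
    # squared-Euclidean ground cost: an optimal assignment problem
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    r, c = linear_sum_assignment(C)
    return C[r, c].mean()

def minibatch_ot(x, y, m=64, k=100, seed=None):
    # average exact OT over k random minibatch pairs of size m: a cheap,
    # unbiased Monte Carlo estimate of the minibatch OT loss
    rng = np.random.default_rng(seed)
    vals = [exact_ot(x[rng.choice(len(x), m, replace=False)],
                     y[rng.choice(len(y), m, replace=False)])
            for _ in range(k)]
    return float(np.mean(vals))

x = np.random.default_rng(0).normal(size=(2000, 2))
print(minibatch_ot(x, x, m=32, k=50, seed=1))  # strictly positive: minibatch OT is not a distance
```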
1 code implementation • 21 Jun 2018 • Antoine Liutkus, Umut Şimşekli, Szymon Majewski, Alain Durmus, Fabian-Robert Stöter
To the best of our knowledge, the proposed algorithm is the first nonparametric implicit generative modeling (IGM) algorithm with explicit theoretical guarantees.
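For context only, and not the paper's algorithm: a short NumPy sketch of the sliced Wasserstein distance that drives this kind of flow, estimated by projecting both empirical measures onto random directions and solving the resulting 1-d OT problems by sorting. The shifted-Gaussian example data are an illustrative assumption.

```python
import numpy as np

def sliced_w2_squared(x, y, n_proj=200, seed=None):
    # Monte Carlo estimate of the squared sliced 2-Wasserstein distance between
    # two equal-size empirical measures: average the 1-d squared W2 (computed by
    # sorting, i.e. quantile matching) over random unit projection directions
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=(n_proj, x.shape[1]))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    px, py = np.sort(x @ theta.T, axis=0), np.sort(y @ theta.T, axis=0)
    return float(((px - py) ** 2).mean())

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 2))
y = rng.normal(size=(500, 2)) + np.array([2.0, 0.0])
print(sliced_w2_squared(x, y, seed=1))  # roughly 2 for this shifted Gaussian example
```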
no code implementations • 26 Feb 2018 • Alain Durmus, Szymon Majewski, Błażej Miasojedow
In this paper, we provide new insights on the Unadjusted Langevin Algorithm.
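For reference, the algorithm itself is the Euler-Maruyama discretization of the Langevin diffusion without a Metropolis correction; the sketch below is a toy illustration with an assumed standard Gaussian target and a hand-picked step size, not the paper's code.

```python
import numpy as np

def ula(grad_U, x0, gamma=1e-2, n_steps=10_000, seed=None):
    # Unadjusted Langevin Algorithm: Euler-Maruyama discretization of
    # dX_t = -grad U(X_t) dt + sqrt(2) dB_t, targeting pi(x) ∝ exp(-U(x)).
    # No Metropolis correction, so the chain carries an O(gamma) bias at stationarity.
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = np.empty((n_steps,) + x.shape)
    for k in range(n_steps):
        x = x - gamma * grad_U(x) + np.sqrt(2.0 * gamma) * rng.standard_normal(x.shape)
        samples[k] = x
    return samples

# toy example (assumption): standard Gaussian target, U(x) = ||x||^2 / 2, grad U(x) = x
samples = ula(lambda x: x, x0=np.zeros(2), gamma=0.05, n_steps=5000, seed=0)
print(samples[1000:].mean(0), samples[1000:].var(0))  # near 0 and ~1, up to discretization bias
```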