no code implementations • 26 Apr 2024 • Benjamin Dupuis, Paul Viallard, George Deligiannidis, Umut Simsekli
We propose data-dependent uniform generalization bounds by approaching the problem from a PAC-Bayesian perspective.
1 code implementation • 19 Feb 2024 • Paul Viallard, Rémi Emonet, Amaury Habrard, Emilie Morvant, Valentina Zantedeschi
In statistical learning theory, a generalization bound usually involves a complexity measure imposed by the considered theoretical framework.
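To make this concrete, a typical bound has the generic shape below, where the complexity term is what the chosen framework imposes (a generic sketch; the notation $\mathcal{H}$, $\widehat{R}_S$, and the complexity functional $C$ are assumptions for illustration, not this paper's notation):

```latex
% Generic shape of a generalization bound: with probability at least 1 - \delta,
% for all h in the hypothesis class H,
R(h) \;\le\; \widehat{R}_S(h) \;+\; \underbrace{C(\mathcal{H}, n, \delta)}_{\text{complexity term}}
% where C is, e.g., a VC-dimension, Rademacher-complexity, or KL-based
% quantity, depending on the theoretical framework.
```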
no code implementations • 13 Feb 2024 • Maxime Haddouche, Paul Viallard, Umut Simsekli, Benjamin Guedj
Modern machine learning usually involves predictors in the overparametrised setting (where the number of trained parameters exceeds the dataset size), and their training yields not only good performance on the training data but also good generalisation capacity.
no code implementations • 7 Feb 2024 • Paul Viallard, Maxime Haddouche, Umut Şimşekli, Benjamin Guedj
We also instantiate our bounds as training objectives, yielding non-trivial guarantees and good practical performance.
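As an illustration of what "bounds as training objectives" means in general (a minimal sketch, not this paper's actual objective; the linear model, the smooth surrogate loss, and the McAllester-style penalty are all assumptions):

```python
# Minimal sketch of PAC-Bayes bound minimization as a training objective
# (illustrative only). We learn a diagonal Gaussian posterior over the
# weights of a linear classifier and penalize its KL divergence to a
# standard Gaussian prior, as in McAllester-type bounds.
import math
import torch

n = 200
X = torch.randn(n, 5)
y = (X.sum(dim=1) > 0).float() * 2 - 1          # toy labels in {-1, +1}

mean = torch.zeros(5, requires_grad=True)        # posterior mean
log_std = torch.zeros(5, requires_grad=True)     # posterior log-std
opt = torch.optim.Adam([mean, log_std], lr=0.05)
delta = 0.05                                     # confidence parameter

for step in range(500):
    std = log_std.exp()
    w = mean + std * torch.randn(5)              # reparameterized draw h ~ rho
    emp_risk = torch.sigmoid(-5.0 * y * (X @ w)).mean()        # smooth 0-1 surrogate
    kl = 0.5 * (std**2 + mean**2 - 1.0 - 2.0 * log_std).sum()  # KL(rho || N(0, I))
    bound = emp_risk + torch.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    opt.zero_grad()
    bound.backward()
    opt.step()
```

Minimizing the bound rather than the empirical risk alone is what makes the resulting guarantee hold for the learned posterior.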
no code implementations • 1 Dec 2023 • Benjamin Dupuis, Paul Viallard
This has been successfully applied to generalization theory by exploiting the fractal properties of those dynamics.
1 code implementation • NeurIPS 2021 • Valentina Zantedeschi, Paul Viallard, Emilie Morvant, Rémi Emonet, Amaury Habrard, Pascal Germain, Benjamin Guedj
We investigate a stochastic counterpart of majority votes over finite ensembles of classifiers, and study its generalization properties.
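A minimal sketch of the general idea of a stochastic majority vote (the Dirichlet weighting and the averaging scheme here are assumptions for illustration, not the paper's exact construction):

```python
# Sketch: a majority vote over a finite ensemble where the weighting over
# voters is itself random (here Dirichlet-distributed), with predictions
# averaged over weight draws.
import numpy as np

rng = np.random.default_rng(0)
votes = rng.choice([-1, 1], size=(10, 100))   # 10 voters, 100 examples, in {-1, +1}
alpha = np.ones(10)                            # Dirichlet concentration (assumed)

def stochastic_majority_vote(votes, alpha, n_draws=1000):
    # Draw random weightings, take the weighted majority decision for each
    # draw, then aggregate the decisions across draws.
    thetas = rng.dirichlet(alpha, size=n_draws)      # (n_draws, n_voters)
    margins = thetas @ votes                          # (n_draws, n_examples)
    return np.sign(np.sign(margins).mean(axis=0))

preds = stochastic_majority_vote(votes, alpha)
```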
1 code implementation • 28 Apr 2021 • Paul Viallard, Pascal Germain, Amaury Habrard, Emilie Morvant
In the PAC-Bayesian literature, the C-Bound refers to an insightful relation between the risk of a majority vote classifier (under the zero-one loss) and the first two moments of its margin (i.e., the expected margin and the voters' diversity).
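In one standard form (with $B_Q$ the $Q$-weighted majority vote and $M_Q$ its margin; notation assumed), the C-Bound reads:

```latex
% C-Bound (standard form): if E[M_Q] > 0, the majority-vote risk satisfies
R(B_Q) \;\le\; 1 \;-\; \frac{\big(\mathbb{E}_{(x,y)\sim D}[M_Q(x,y)]\big)^{2}}{\mathbb{E}_{(x,y)\sim D}[M_Q(x,y)^{2}]}
```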
1 code implementation • NeurIPS 2021 • Paul Viallard, Guillaume Vidot, Amaury Habrard, Emilie Morvant
We propose the first general PAC-Bayesian generalization bounds for adversarial robustness, which estimate, at test time, how much a model will be invariant to imperceptible perturbations of its input.
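As a rough illustration of measuring invariance to small perturbations at test time (a Monte Carlo sketch under assumed notation; the paper's guarantees are analytical bounds, not this empirical estimator):

```python
# Rough Monte Carlo sketch: estimate how often a model's prediction stays
# unchanged under small random perturbations of the input (an illustrative
# stand-in, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

def predict(X):
    # Hypothetical fixed linear model used as a stand-in predictor.
    w = np.array([1.0, -2.0, 0.5])
    return np.sign(X @ w)

def invariance_rate(X, epsilon=0.05, n_samples=100):
    base = predict(X)
    stable = np.zeros(len(X))
    for _ in range(n_samples):
        noise = rng.uniform(-epsilon, epsilon, size=X.shape)  # l_inf ball
        stable += (predict(X + noise) == base)
    return (stable / n_samples).mean()  # average invariance over the test set

X_test = rng.normal(size=(500, 3))
print(invariance_rate(X_test))
```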
1 code implementation • 17 Feb 2021 • Paul Viallard, Pascal Germain, Amaury Habrard, Emilie Morvant
PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability of randomized classifiers.
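For context, a classical PAC-Bayesian bound of McAllester type (one standard form; notation assumed) states that with probability at least $1-\delta$ over a sample $S$ of size $n$, simultaneously for all posteriors $\rho$ over classifiers, given a prior $\pi$:

```latex
\mathbb{E}_{h\sim\rho}[R(h)] \;\le\; \mathbb{E}_{h\sim\rho}[\widehat{R}_S(h)]
\;+\; \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}
```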