Search Results for author: Paul Viallard

Found 9 papers, 5 papers with code

Leveraging PAC-Bayes Theory and Gibbs Distributions for Generalization Bounds with Complexity Measures

1 code implementation • 19 Feb 2024 • Paul Viallard, Rémi Emonet, Amaury Habrard, Emilie Morvant, Valentina Zantedeschi

In statistical learning theory, a generalization bound usually involves a complexity measure imposed by the considered theoretical framework.

Generalization Bounds · Learning Theory

A PAC-Bayesian Link Between Generalisation and Flat Minima

no code implementations • 13 Feb 2024 • Maxime Haddouche, Paul Viallard, Umut Simsekli, Benjamin Guedj

Modern machine learning usually involves predictors in the overparametrised setting (more trained parameters than dataset size), and their training yields not only good performance on training data but also good generalisation capacity.

Tighter Generalisation Bounds via Interpolation

no code implementations • 7 Feb 2024 • Paul Viallard, Maxime Haddouche, Umut Şimşekli, Benjamin Guedj

We also instantiate our bounds as training objectives, yielding non-trivial guarantees and practical performances.

From Mutual Information to Expected Dynamics: New Generalization Bounds for Heavy-Tailed SGD

no code implementations • 1 Dec 2023 • Benjamin Dupuis, Paul Viallard

This has been successfully applied to generalization theory by exploiting the fractal properties of those dynamics.

Generalization Bounds

Self-Bounding Majority Vote Learning Algorithms by the Direct Minimization of a Tight PAC-Bayesian C-Bound

1 code implementation • 28 Apr 2021 • Paul Viallard, Pascal Germain, Amaury Habrard, Emilie Morvant

In the PAC-Bayesian literature, the C-Bound refers to an insightful relation between the risk of a majority vote classifier (under the zero-one loss) and the first two moments of its margin (i.e., the expected margin and the voters' diversity).
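The relation referred to here is, in its classical form (a sketch from the broader C-Bound literature, not a statement taken from this paper), the following: writing $M_Q$ for the margin of the $Q$-weighted majority vote $\mathrm{MV}_Q$ on a random example, if the expected margin is positive then

```latex
% Classical C-Bound (sketch): risk of the majority vote bounded via the
% first two moments of its margin M_Q, assuming E[M_Q] > 0.
R(\mathrm{MV}_Q) \;\le\; 1 - \frac{\big(\mathbb{E}[M_Q]\big)^2}{\mathbb{E}[M_Q^2]}
```

The bound follows from Chebyshev–Cantelli applied to the event $M_Q \le 0$; a larger expected margin or a smaller second moment (more diverse voters) tightens it.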

Generalization Bounds

A PAC-Bayes Analysis of Adversarial Robustness

1 code implementation • NeurIPS 2021 • Paul Viallard, Guillaume Vidot, Amaury Habrard, Emilie Morvant

We propose the first general PAC-Bayesian generalization bounds for adversarial robustness, which estimate, at test time, how invariant a model will be to imperceptible perturbations of its input.

Adversarial Robustness · Generalization Bounds +1

A General Framework for the Practical Disintegration of PAC-Bayesian Bounds

1 code implementation • 17 Feb 2021 • Paul Viallard, Pascal Germain, Amaury Habrard, Emilie Morvant

PAC-Bayesian bounds are known to be tight and informative when studying the generalization ability of randomized classifiers.

Generalization Bounds
