Search Results for author: Maxime Haddouche

Found 9 papers, 0 papers with code

A PAC-Bayesian Link Between Generalisation and Flat Minima

no code implementations13 Feb 2024 Maxime Haddouche, Paul Viallard, Umut Simsekli, Benjamin Guedj

Modern machine learning usually involves predictors in the overparametrised setting (number of trained parameters greater than dataset size), and their training yields not only good performance on training data but also good generalisation capacity.

Tighter Generalisation Bounds via Interpolation

no code implementations7 Feb 2024 Paul Viallard, Maxime Haddouche, Umut Şimşekli, Benjamin Guedj

We also instantiate our bounds as training objectives, yielding non-trivial guarantees and practical performances.

Federated Learning with Nonvacuous Generalisation Bounds

no code implementations17 Oct 2023 Pierre Jobic, Maxime Haddouche, Benjamin Guedj

We introduce a novel strategy to train randomised predictors in federated learning, where each node of the network aims to preserve its privacy by releasing a local predictor while keeping its training dataset secret from the other nodes.

Federated Learning

Wasserstein PAC-Bayes Learning: Exploiting Optimisation Guarantees to Explain Generalisation

no code implementations14 Apr 2023 Maxime Haddouche, Benjamin Guedj

PAC-Bayes learning is an established framework both to assess the generalisation ability of learning algorithms and to design new learning algorithms by exploiting generalisation bounds as training objectives.

Optimistically Tempered Online Learning

no code implementations18 Jan 2023 Maxime Haddouche, Olivier Wintenberger, Benjamin Guedj

Optimistic Online Learning algorithms have been developed to exploit expert advice, optimistically assumed to be always useful.

PAC-Bayes Generalisation Bounds for Heavy-Tailed Losses through Supermartingales

no code implementations3 Oct 2022 Maxime Haddouche, Benjamin Guedj

While PAC-Bayes is now an established learning framework for light-tailed losses (e.g., subgaussian or subexponential), its extension to heavy-tailed losses remains largely uncharted and has attracted growing interest in recent years.

Online PAC-Bayes Learning

no code implementations31 May 2022 Maxime Haddouche, Benjamin Guedj

Most PAC-Bayesian bounds hold in the batch learning setting where data is collected at once, prior to inference or prediction.

Upper and Lower Bounds on the Performance of Kernel PCA

no code implementations18 Dec 2020 Maxime Haddouche, Benjamin Guedj, John Shawe-Taylor

Principal Component Analysis (PCA) is a popular method for dimension reduction and has attracted unfailing interest for decades.

Dimensionality Reduction

PAC-Bayes unleashed: generalisation bounds with unbounded losses

no code implementations12 Jun 2020 Maxime Haddouche, Benjamin Guedj, Omar Rivasplata, John Shawe-Taylor

We present new PAC-Bayesian generalisation bounds for learning problems with unbounded loss functions.

Regression
