no code implementations • 13 Feb 2024 • Maxime Haddouche, Paul Viallard, Umut Simsekli, Benjamin Guedj
Modern machine learning usually involves predictors in the overparametrised setting (more trained parameters than data points), and training them yields not only good performance on the training data but also good generalisation capacity.
no code implementations • 7 Feb 2024 • Paul Viallard, Maxime Haddouche, Umut Şimşekli, Benjamin Guedj
We also instantiate our bounds as training objectives, yielding non-trivial guarantees and good practical performance.
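For intuition, here is a minimal sketch of the generic recipe of turning a PAC-Bayes bound into a training objective, assuming a Gaussian posterior over a linear predictor, a McAllester-style bound, and a cross-entropy surrogate loss; these are illustrative choices, not the objectives derived in the paper.

```python
# Hedged sketch: minimise empirical risk + PAC-Bayes complexity term
# for a Gaussian posterior Q = N(mu, diag(sigma^2)) and prior P = N(0, I).
# The cross-entropy surrogate is unbounded, so this only illustrates the
# recipe; a McAllester bound strictly requires a [0,1]-bounded loss.
import math
import torch

m, d, delta = 500, 20, 0.05
X = torch.randn(m, d)
y = (X[:, 0] > 0).float()  # toy labels

mu = torch.zeros(d, requires_grad=True)
rho = torch.zeros(d, requires_grad=True)  # sigma = softplus(rho) > 0

opt = torch.optim.Adam([mu, rho], lr=1e-2)
for step in range(2000):
    sigma = torch.nn.functional.softplus(rho)
    w = mu + sigma * torch.randn(d)  # reparameterised sample h ~ Q
    emp_risk = torch.nn.functional.binary_cross_entropy_with_logits(X @ w, y)
    # Closed-form KL(N(mu, diag(sigma^2)) || N(0, I)).
    kl = 0.5 * (sigma.pow(2) + mu.pow(2) - 1 - 2 * sigma.log()).sum()
    bound = emp_risk + torch.sqrt((kl + math.log(2 * math.sqrt(m) / delta)) / (2 * m))
    opt.zero_grad(); bound.backward(); opt.step()
```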
no code implementations • 17 Oct 2023 • Pierre Jobic, Maxime Haddouche, Benjamin Guedj
We introduce a novel strategy to train randomised predictors in federated learning, where each node of the network aims to preserve its privacy by releasing a local predictor while keeping its training dataset secret from the other nodes.
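A toy illustration of the setting only (not the strategy proposed in the paper): each node fits a local mean on its private data, releases a Gaussian-perturbed predictor, and the server aggregates the released predictors, never the data. All names and constants here are hypothetical.

```python
# Hedged sketch: federated training of randomised predictors on a toy
# least-squares problem. Only sampled predictors leave each node.
import numpy as np

rng = np.random.default_rng(0)
d, nodes, rounds, lr, noise = 5, 4, 20, 0.1, 0.05
local_data = [(rng.normal(size=(50, d)), rng.normal(size=50)) for _ in range(nodes)]
global_mu = np.zeros(d)

for _ in range(rounds):
    shared = []
    for X, y in local_data:              # each node trains privately
        mu = global_mu.copy()
        for _ in range(10):              # local gradient steps on squared loss
            grad = X.T @ (X @ mu - y) / len(y)
            mu -= lr * grad
        # Release a randomised predictor: one sample from N(mu, noise^2 I).
        shared.append(mu + noise * rng.normal(size=d))
    global_mu = np.mean(shared, axis=0)  # server aggregates predictors only
```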
no code implementations • 14 Apr 2023 • Maxime Haddouche, Benjamin Guedj
PAC-Bayes learning is an established framework to both assess the generalisation ability of learning algorithms and design new learning algorithms by exploiting generalisation bounds as training objectives.
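For reference, the classical McAllester bound for a $[0,1]$-bounded loss (standard background, not the bounds of this paper): for any prior $P$, with probability at least $1-\delta$ over an i.i.d. $m$-sample $S$, simultaneously for all posteriors $Q$,

$$\mathbb{E}_{h\sim Q}[R(h)] \;\le\; \mathbb{E}_{h\sim Q}[\hat R_S(h)] + \sqrt{\frac{\mathrm{KL}(Q\|P) + \ln(2\sqrt{m}/\delta)}{2m}},$$

where $R$ and $\hat R_S$ denote the population and empirical risks. Minimising the right-hand side over $Q$ is what "bounds as training objectives" refers to.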
no code implementations • 18 Jan 2023 • Maxime Haddouche, Olivier Wintenberger, Benjamin Guedj
Optimistic Online Learning algorithms have been developed to exploit expert advice, optimistically assumed to always be useful.
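A minimal sketch of the optimistic exponentially weighted forecaster, the textbook instance of this idea (illustrative only, not the algorithm analysed in the paper): the learner's weights account for a "hint" of the next loss vector, so reliable hints speed up learning. Here the hint is simulated as a noisy preview of the true losses.

```python
# Hedged sketch: optimistic Hedge over K experts for T rounds.
import numpy as np

rng = np.random.default_rng(0)
T, K, eta = 1000, 5, 0.1
cum_loss = np.zeros(K)

for t in range(T):
    losses = rng.uniform(size=K)               # adversary's losses for round t
    hint = losses + 0.1 * rng.normal(size=K)   # simulated optimistic hint
    # Play weights that already incorporate the hint.
    w = np.exp(-eta * (cum_loss + hint))
    p = w / w.sum()
    learner_loss = p @ losses                  # expected loss this round
    cum_loss += losses                         # then observe the true losses
```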
no code implementations • 3 Oct 2022 • Maxime Haddouche, Benjamin Guedj
While PAC-Bayes is now an established learning framework for light-tailed losses (e.g., subgaussian or subexponential), its extension to heavy-tailed losses remains largely uncharted and has attracted growing interest in recent years.
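For context, the standard definition of subgaussianity (textbook material, not specific to this paper): a random variable $X$ is $\sigma^2$-subgaussian if

$$\mathbb{E}\big[e^{\lambda(X-\mathbb{E}[X])}\big] \;\le\; e^{\lambda^2\sigma^2/2} \quad \text{for all } \lambda\in\mathbb{R},$$

whereas heavy-tailed losses may have no finite moment generating function at all, which is why the light-tailed proof techniques break down.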
no code implementations • 31 May 2022 • Maxime Haddouche, Benjamin Guedj
Most PAC-Bayesian bounds hold in the batch learning setting where data is collected at once, prior to inference or prediction.
no code implementations • 18 Dec 2020 • Maxime Haddouche, Benjamin Guedj, John Shawe-Taylor
Principal Component Analysis (PCA) is a popular method for dimension reduction and has attracted unfailing interest for decades.
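As background, a minimal PCA sketch via SVD, the standard computation (not the PAC-Bayes analysis developed in the paper):

```python
# Hedged sketch: project centred data onto its top-k principal components.
import numpy as np

X = np.random.default_rng(0).normal(size=(200, 10))
Xc = X - X.mean(axis=0)                 # centre the data
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 2
Z = Xc @ Vt[:k].T                       # scores in the top-k subspace
explained = (S[:k] ** 2).sum() / (S ** 2).sum()  # fraction of variance kept
```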
no code implementations • 12 Jun 2020 • Maxime Haddouche, Benjamin Guedj, Omar Rivasplata, John Shawe-Taylor
We present new PAC-Bayesian generalisation bounds for learning problems with unbounded loss functions.