Search Results for author: Felix Biggs

Found 7 papers, 3 papers with code

MMD-FUSE: Learning and Combining Kernels for Two-Sample Testing Without Data Splitting

1 code implementation • NeurIPS 2023 • Felix Biggs, Antonin Schrab, Arthur Gretton

We propose novel statistics which maximise the power of a two-sample test based on the Maximum Mean Discrepancy (MMD), by adapting over the set of kernels used in defining it.

Two-sample testing
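
For context, here is a minimal NumPy sketch of the quantity being adapted: an unbiased estimate of $\mathrm{MMD}^2$ for a single fixed Gaussian kernel. The kernel choice, bandwidth, and function names are illustrative; this is not the MMD-FUSE statistic or its kernel-selection procedure.

```python
import numpy as np

def gaussian_kernel(A, B, bandwidth):
    """Gaussian (RBF) kernel matrix between the rows of A and the rows of B."""
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased estimate of MMD^2 between samples X and Y for one fixed kernel."""
    m, n = len(X), len(Y)
    Kxx = gaussian_kernel(X, X, bandwidth)
    Kyy = gaussian_kernel(Y, Y, bandwidth)
    Kxy = gaussian_kernel(X, Y, bandwidth)
    # Drop diagonal terms to obtain the unbiased U-statistic.
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_xx + term_yy - 2 * Kxy.mean()

# Toy usage: two Gaussian samples with shifted means.
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))
Y = rng.normal(0.5, 1.0, size=(200, 2))
print(mmd2_unbiased(X, Y, bandwidth=1.0))
```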

Tighter PAC-Bayes Generalisation Bounds by Leveraging Example Difficulty

no code implementations • 20 Oct 2022 • Felix Biggs, Benjamin Guedj

We introduce a modified version of the excess risk, which can be used to obtain tighter, fast-rate PAC-Bayesian generalisation bounds.
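
For orientation, the classical result such bounds refine (not the paper's new excess-risk variant) is the PAC-Bayes-kl bound: for losses in $[0, 1]$, with probability at least $1-\delta$ over an i.i.d. sample $S$ of size $n$, simultaneously for all posteriors $Q$, $\mathrm{kl}(\hat{R}_S(Q) \,\|\, R(Q)) \le (\mathrm{KL}(Q\|P) + \ln(2\sqrt{n}/\delta))/n$, where $\hat{R}_S(Q)$ and $R(Q)$ are the empirical and population Gibbs risks, $P$ is a data-independent prior, and $\mathrm{kl}(\cdot\|\cdot)$ is the binary relative entropy.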

A Note on the Efficient Evaluation of PAC-Bayes Bounds

no code implementations • 12 Sep 2022 • Felix Biggs

When utilising PAC-Bayes theory for risk certification, it is usually necessary to estimate and bound the Gibbs risk of the PAC-Bayes posterior.
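
To illustrate the task this concerns, here is a generic sketch (not the note's method): the Gibbs risk is commonly estimated by drawing predictors from the posterior and converting the Monte-Carlo average into a high-probability bound via a kl inversion. The function names, interfaces, and the Maurer-style concentration constant below are assumptions of this sketch.

```python
import numpy as np
from scipy.optimize import brentq

def kl_bernoulli(q, p):
    """Binary relative entropy kl(q || p)."""
    eps = 1e-12
    q, p = np.clip(q, eps, 1 - eps), np.clip(p, eps, 1 - eps)
    return q * np.log(q / p) + (1 - q) * np.log((1 - q) / (1 - p))

def kl_inverse_upper(q_hat, bound):
    """Largest p with kl(q_hat || p) <= bound (Chernoff / kl inversion)."""
    if kl_bernoulli(q_hat, 1 - 1e-9) <= bound:
        return 1.0
    return brentq(lambda p: kl_bernoulli(q_hat, p) - bound, q_hat, 1 - 1e-9)

def certify_gibbs_risk(sample_predictor, X, y, m=1000, delta=0.05):
    """Monte-Carlo estimate of the empirical Gibbs risk over m posterior draws,
    turned into an upper bound holding with probability 1 - delta over the draws
    (Maurer-style kl concentration for [0, 1]-valued i.i.d. averages)."""
    errors = [np.mean(sample_predictor()(X) != y) for _ in range(m)]
    q_hat = min(float(np.mean(errors)), 1.0 - 1e-9)
    return kl_inverse_upper(q_hat, np.log(2.0 * np.sqrt(m) / delta) / m)
```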

On Margins and Generalisation for Voting Classifiers

1 code implementation • 9 Jun 2022 • Felix Biggs, Valentina Zantedeschi, Benjamin Guedj

We study the generalisation properties of majority voting on finite ensembles of classifiers, proving margin-based generalisation bounds via the PAC-Bayes theory.
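
As a small illustration of the objects studied (a finite ensemble, its weighted majority vote, and the vote's margin on each example), with toy data and weights chosen for this sketch rather than anything from the paper:

```python
import numpy as np

def majority_vote(predictions, weights):
    """Weighted majority vote over binary (+/-1) predictions.
    predictions: shape (n_classifiers, n_examples), values in {-1, +1}.
    weights: probability vector over the ensemble."""
    scores = weights @ predictions          # expected prediction per example
    return np.sign(scores), scores          # the vote and its margin in [-1, 1]

# Toy ensemble: three decision stumps on 1-D data.
X = np.array([-2.0, -0.5, 0.3, 1.5])
y = np.array([-1, -1, 1, 1])
stumps = np.array([
    np.sign(X - (-1.0)),
    np.sign(X - 0.0),
    np.sign(X - 1.0),
])
rho = np.array([0.2, 0.5, 0.3])
vote, margin = majority_vote(stumps, rho)
print("votes:", vote, "signed margins:", y * margin)  # y * margin > 0 means correct
```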

Non-Vacuous Generalisation Bounds for Shallow Neural Networks

1 code implementation • 3 Feb 2022 • Felix Biggs, Benjamin Guedj

We focus on a specific class of shallow neural networks with a single hidden layer, namely those with $L_2$-normalised data and either a sigmoid-shaped Gaussian error function ("erf") activation or a Gaussian Error Linear Unit (GELU) activation.
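
A minimal sketch of the network class described: a single hidden layer acting on $L_2$-normalised inputs with either an erf or a GELU activation. Widths, scales, and names here are illustrative, not the paper's construction.

```python
import numpy as np
from scipy.special import erf

def gelu(z):
    """Gaussian Error Linear Unit: z * Phi(z)."""
    return z * 0.5 * (1.0 + erf(z / np.sqrt(2.0)))

def shallow_net(x, W1, w2, activation="erf"):
    """One-hidden-layer network applied to an L2-normalised input."""
    x = x / np.linalg.norm(x)                      # L2-normalise the data point
    pre = W1 @ x                                   # hidden pre-activations
    hidden = erf(pre) if activation == "erf" else gelu(pre)
    return w2 @ hidden                             # scalar output

rng = np.random.default_rng(0)
x = rng.normal(size=10)
W1 = rng.normal(size=(50, 10)) / np.sqrt(10)       # 50 hidden units
w2 = rng.normal(size=50) / np.sqrt(50)             # output weights
print(shallow_net(x, W1, w2, "erf"), shallow_net(x, W1, w2, "gelu"))
```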

On Margins and Derandomisation in PAC-Bayes

no code implementations • 8 Jul 2021 • Felix Biggs, Benjamin Guedj

We give a general recipe for derandomising PAC-Bayesian bounds using margins, with the critical ingredient being that our randomised predictions concentrate around some value.
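
To illustrate the concentration ingredient in a toy setting (this is not the paper's recipe): a Gaussian-perturbed linear classifier rarely disagrees with the mean classifier when the normalised margin is large relative to the perturbation scale, which is what allows a randomised prediction to be traded for a deterministic one. The numbers below are arbitrary.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
mu = np.array([2.0, 1.0])          # mean weight vector (the deterministic predictor)
sigma = 1.5                        # scale of the Gaussian perturbation
x = np.array([1.0, 0.5])           # a test point

# Monte-Carlo: how often does a perturbed classifier flip the mean classifier's sign?
W = mu + sigma * rng.normal(size=(100_000, 2))
flip_rate = np.mean(np.sign(W @ x) != np.sign(mu @ x))

# Closed form: w.x ~ N(mu.x, sigma^2 ||x||^2), so the flip probability is
# Phi(-|mu.x| / (sigma ||x||)); a large normalised margin forces concentration.
margin = abs(mu @ x) / np.linalg.norm(x)
print(flip_rate, norm.cdf(-margin / sigma))
```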

Differentiable PAC-Bayes Objectives with Partially Aggregated Neural Networks

no code implementations • 22 Jun 2020 • Felix Biggs, Benjamin Guedj

We make three related contributions motivated by the challenge of training stochastic neural networks, particularly in a PAC-Bayesian setting: (1) we show how averaging over an ensemble of stochastic neural networks enables a new class of \emph{partially-aggregated} estimators; (2) we show that these lead to provably lower-variance gradient estimates for non-differentiable signed-output networks; (3) we reformulate a PAC-Bayesian bound for these networks to derive a directly optimisable, differentiable objective and a generalisation guarantee, without using a surrogate loss or loosening the bound.
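
As a hedged sketch of the aggregation idea behind contribution (1): for a signed-output unit with a Gaussian weight posterior, the expectation of its output over the weights has a closed form, so that part of the network can be averaged analytically instead of sampled, which is the intuition behind lower-variance partially-aggregated estimates. The identities below are standard Gaussian facts used for illustration, not the paper's estimator.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
mu, sigma = rng.normal(size=20), 0.3     # Gaussian posterior N(mu, sigma^2 I) on the weights
x = rng.normal(size=20)                  # one input to a sign-output unit

# Monte-Carlo estimate of E_w[sign(w . x)] by sampling weights (noisy).
W = mu + sigma * rng.normal(size=(1000, 20))
mc_estimate = np.mean(np.sign(W @ x))

# Aggregated (analytic) expectation: w.x ~ N(mu.x, sigma^2 ||x||^2), hence
# E[sign(w . x)] = erf(mu.x / (sqrt(2) * sigma * ||x||)).
analytic = erf((mu @ x) / (np.sqrt(2) * sigma * np.linalg.norm(x)))
print(mc_estimate, analytic)
```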
