no code implementations • 2 Mar 2022 • Eugenio Clerico, Amitis Shidani, George Deligiannidis, Arnaud Doucet
This work discusses how to derive upper bounds for the expected generalisation error of supervised learning algorithms by means of the chaining technique.
1 code implementation • 1 Mar 2022 • Oscar Clivio, Fabian Falck, Brieuc Lehmann, George Deligiannidis, Chris Holmes
We leverage these balancing scores to perform matching for high-dimensional causal inference and call this procedure neural score matching.
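As a rough illustration of the matching step only (not the paper's implementation; the balancing scores are assumed to be already learned, and `match_on_score` is a hypothetical helper), each treated unit can be matched to its nearest control unit in score space and the outcome differences averaged:

```python
# Minimal sketch: 1-nearest-neighbour matching on a low-dimensional balancing
# score, followed by a simple matching estimate of the treatment effect on the
# treated (ATT). How the score is learned is omitted here.
import numpy as np

def match_on_score(score_treated, score_control, y_treated, y_control):
    """score_*: (n, d) arrays of balancing scores; y_*: (n,) outcomes."""
    att_terms = []
    for s, y in zip(score_treated, y_treated):
        # nearest control unit in score space
        j = np.argmin(np.linalg.norm(score_control - s, axis=1))
        att_terms.append(y - y_control[j])
    return float(np.mean(att_terms))
```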
no code implementations • 27 Feb 2022 • Yuyang Shi, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet
Denoising diffusion models have recently emerged as a powerful class of generative models.
no code implementations • 1 Dec 2021 • EL Mahdi Khribch, George Deligiannidis, Daniel Paulin
In this paper, we consider sampling from a class of distributions with thin tails supported on $\mathbb{R}^d$ and make two primary contributions.
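As a generic illustration of the sampling setup (not necessarily the algorithms analysed in the paper), an unadjusted Langevin sampler targeting a smooth density proportional to $e^{-U(x)}$ on $\mathbb{R}^d$ looks as follows:

```python
# Illustrative unadjusted Langevin algorithm (ULA); assumes access to grad_U.
import numpy as np

def ula(grad_U, x0, step, n_steps, rng=np.random.default_rng(0)):
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * noise  # Langevin update
        samples.append(x.copy())
    return np.array(samples)

# Example: standard Gaussian target, U(x) = ||x||^2 / 2, so grad_U(x) = x.
chain = ula(lambda x: x, x0=np.zeros(2), step=0.05, n_steps=1000)
```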
1 code implementation • 22 Oct 2021 • Eugenio Clerico, George Deligiannidis, Arnaud Doucet
Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent.
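The general recipe behind such approaches can be sketched as follows (a hedged illustration, not any specific paper's bound or code): place a Gaussian distribution over the weights, sample them with the reparameterisation trick, and minimise an empirical loss plus a weighted KL term to a prior by SGD over the Gaussian's parameters.

```python
# Sketch of a PAC-Bayes-inspired training objective for a stochastic linear
# classifier with a Gaussian "posterior" N(mu, softplus(rho)^2) over weights.
# In practice this is differentiated with autodiff and minimised over (mu, rho).
import numpy as np

rng = np.random.default_rng(0)

def pac_bayes_objective(mu, rho, X, y, lam=1e-2):
    sigma = np.log1p(np.exp(rho))                      # softplus keeps sigma > 0
    w = mu + sigma * rng.standard_normal(mu.shape)     # reparameterised sample
    p = 1.0 / (1.0 + np.exp(-(X @ w)))                 # stochastic classifier
    nll = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    kl = 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))  # KL to N(0, I)
    return nll + lam * kl                              # empirical loss + KL penalty
```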
no code implementations • 18 Aug 2021 • George Deligiannidis, Valentin De Bortoli, Arnaud Doucet
We establish the uniform-in-time stability, w.r.t. the marginals, of the Iterative Proportional Fitting procedure, also known as the Sinkhorn algorithm, used to solve entropy-regularised optimal transport problems.
1 code implementation • 17 Jun 2021 • Eugenio Clerico, George Deligiannidis, Arnaud Doucet
The limit of infinite width allows for substantial simplifications in the analytical study of overparameterized neural networks.
no code implementations • NeurIPS 2021 • Alexander Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gürbüzbalaban, Umut Şimşekli, Lingjiong Zhu
As our main contribution, we prove that the generalization error of a stochastic optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.
1 code implementation • 15 Feb 2021 • Adrien Corenflos, James Thornton, George Deligiannidis, Arnaud Doucet
Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models.
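For reference, a minimal bootstrap particle filter looks like the sketch below (a generic textbook version with placeholder model functions, not the specific method developed in the paper):

```python
# Bootstrap particle filter returning a log-likelihood estimate; init(),
# transition() and log_obs_density() are user-supplied model placeholders.
import numpy as np

def bootstrap_pf(y, n_particles, init, transition, log_obs_density,
                 rng=np.random.default_rng(0)):
    x = init(n_particles, rng)                      # (N, d) initial particles
    log_lik = 0.0
    for t, y_t in enumerate(y):
        if t > 0:
            x = transition(x, rng)                  # propagate through the dynamics
        logw = log_obs_density(y_t, x)              # (N,) log observation weights
        m = logw.max()
        w = np.exp(logw - m)
        log_lik += m + np.log(w.mean())             # running log-likelihood estimate
        idx = rng.choice(n_particles, size=n_particles, p=w / w.sum())
        x = x[idx]                                  # multinomial resampling
    return log_lik
```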
no code implementations • 24 Oct 2020 • Soufiane Hayou, Eugenio Clerico, Bobby He, George Deligiannidis, Arnaud Doucet, Judith Rousseau
Deep ResNet architectures have achieved state-of-the-art performance on many tasks.
1 code implementation • NeurIPS 2020 • Umut Şimşekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu
Despite the success of stochastic gradient descent (SGD) in a wide range of applications, characterizing its generalization properties in non-convex deep learning problems remains an important challenge.
3 code implementations • ICML 2020 • Rob Cornish, Anthony L. Caterini, George Deligiannidis, Arnaud Doucet
We show that normalising flows become pathological when used to model targets whose supports have complicated topologies.
no code implementations • 25 Sep 2019 • Rob Cornish, Anthony Caterini, George Deligiannidis, Arnaud Doucet
We argue that flow-based density models based on continuous bijections are limited in their ability to learn target distributions with complicated topologies, and propose localised generative flows (LGFs) to address this problem.
no code implementations • 3 Mar 2019 • Sebastian M. Schmon, Arnaud Doucet, George Deligiannidis
When the weights in a particle filter are not available analytically, standard resampling methods cannot be employed.
no code implementations • 5 Feb 2019 • Lawrence Middleton, George Deligiannidis, Arnaud Doucet, Pierre E. Jacob
We consider the approximation of expectations with respect to the distribution of a latent Markov process given noisy measurements.
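Concretely (notation assumed here rather than taken from the paper), the object being approximated is a smoothing expectation of the form $\mathbb{E}\left[h(X_{0:T}) \mid Y_{0:T} = y_{0:T}\right] = \int h(x_{0:T})\, p(x_{0:T} \mid y_{0:T})\, \mathrm{d}x_{0:T}$, where $(X_t)$ is the latent Markov process, $(Y_t)$ the noisy measurements, and $h$ a test function.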
1 code implementation • 28 Jan 2019 • Robert Cornish, Paul Vanetti, Alexandre Bouchard-Côté, George Deligiannidis, Arnaud Doucet
Bayesian inference via standard Markov Chain Monte Carlo (MCMC) methods is too computationally intensive to handle large datasets, since the cost per step usually scales like $\Theta(n)$ in the number of data points $n$.
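To see where the $\Theta(n)$ comes from, here is a sketch of a standard random-walk Metropolis step (illustrative only, not the scalable algorithm proposed in the paper): the acceptance ratio requires the log-likelihood summed over all $n$ data points at both the current and the proposed parameter.

```python
# One random-walk Metropolis step; the log-posterior evaluation sums over the
# whole dataset, so each step costs Theta(n).
import numpy as np

def mh_step(theta, log_prior, log_lik_i, data, step_size, rng):
    prop = theta + step_size * rng.standard_normal(theta.shape)
    def log_post(t):
        return log_prior(t) + sum(log_lik_i(t, x) for x in data)  # Theta(n) sum
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        return prop                                 # accept the proposal
    return theta                                    # reject, keep current state
```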