Search Results for author: George Deligiannidis

Found 16 papers, 7 papers with code

Chained Generalisation Bounds

no code implementations • 2 Mar 2022 • Eugenio Clerico, Amitis Shidani, George Deligiannidis, Arnaud Doucet

This work discusses how to derive upper bounds for the expected generalisation error of supervised learning algorithms by means of the chaining technique.
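
For reference (this is not the paper's own bound), a classical chaining result of this flavour is Dudley's entropy-integral bound for a zero-mean subgaussian process $(X_f)_{f \in \mathcal{F}}$ with respect to a metric $d$:

$$\mathbb{E}\Big[\sup_{f \in \mathcal{F}} X_f\Big] \;\le\; C \int_0^{\infty} \sqrt{\log N(\mathcal{F}, d, \varepsilon)}\, \mathrm{d}\varepsilon,$$

where $N(\mathcal{F}, d, \varepsilon)$ is the $\varepsilon$-covering number and $C$ is an absolute constant; chained generalisation bounds replace a single-scale complexity term with such a multi-scale quantity.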

Neural Score Matching for High-Dimensional Causal Inference

1 code implementation • 1 Mar 2022 • Oscar Clivio, Fabian Falck, Brieuc Lehmann, George Deligiannidis, Chris Holmes

We leverage these balancing scores to perform matching for high-dimensional causal inference and call this procedure neural score matching.

Causal Inference
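
As a rough illustration of matching on a low-dimensional balancing score (the paper learns neural balancing scores; the logistic-regression propensity score and all names below are stand-ins, not the paper's implementation):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def propensity_matching_att(X, treat, y):
    """Estimate the ATT by 1-nearest-neighbour matching on a scalar balancing score.

    X: (n, d) covariates, treat: (n,) binary treatment indicator, y: (n,) outcomes.
    Matching on a score sidesteps nearest-neighbour search in d dimensions.
    """
    score = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]
    treated = np.where(treat == 1)[0]
    control = np.where(treat == 0)[0]
    effects = []
    for i in treated:
        # match each treated unit to the control unit closest in score space
        j = control[np.argmin(np.abs(score[control] - score[i]))]
        effects.append(y[i] - y[j])
    return float(np.mean(effects))

# toy usage with synthetic data (true treatment effect = 2)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))
y = X[:, 0] + 2.0 * treat + rng.normal(size=500)
print(propensity_matching_att(X, treat, y))
```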

On Mixing Times of Metropolized Algorithm With Optimization Step (MAO): A New Framework

no code implementations • 1 Dec 2021 • El Mahdi Khribch, George Deligiannidis, Daniel Paulin

In this paper, we consider sampling from a class of distributions with thin tails supported on $\mathbb{R}^d$ and make two primary contributions.

Conditionally Gaussian PAC-Bayes

1 code implementation • 22 Oct 2021 • Eugenio Clerico, George Deligiannidis, Arnaud Doucet

Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent.
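
The objectives involved are typically of the McAllester/Maurer form (stated generically here, not as this paper's exact bound): with probability at least $1-\delta$ over an i.i.d. sample of size $n$, simultaneously for all posteriors $\rho$ over hypotheses,

$$\mathbb{E}_{h\sim\rho}\, L(h) \;\le\; \mathbb{E}_{h\sim\rho}\, \hat L_n(h) \;+\; \sqrt{\frac{\mathrm{KL}(\rho\,\|\,\pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},$$

where $\pi$ is a data-free prior; training amounts to running SGD on a differentiable surrogate of the right-hand side over the parameters of the stochastic network defining $\rho$.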

Wide stochastic networks: Gaussian limit and PAC-Bayesian training

1 code implementation • 17 Jun 2021 • Eugenio Clerico, George Deligiannidis, Arnaud Doucet

The limit of infinite width allows for substantial simplifications in the analytical study of overparameterized neural networks.

Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms

no code implementations • NeurIPS 2021 • Alexander Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gürbüzbalaban, Umut Şimşekli, Lingjiong Zhu

As our main contribution, we prove that the generalization error of a stochastic optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.

Generalization Bounds • Learning Theory • +1

Differentiable Particle Filtering via Entropy-Regularized Optimal Transport

1 code implementation • 15 Feb 2021 • Adrien Corenflos, James Thornton, George Deligiannidis, Arnaud Doucet

Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models.

Variational Inference
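
For context, a bootstrap particle filter with standard multinomial resampling (the non-differentiable step that the entropy-regularised optimal-transport construction replaces) can be sketched as follows; the linear-Gaussian model is a placeholder, not taken from the paper:

```python
import numpy as np

def bootstrap_pf(ys, n_particles=1000, sigma_x=1.0, sigma_y=0.5, rng=None):
    """Bootstrap particle filter for x_t = 0.9 x_{t-1} + N(0, sigma_x^2),
    y_t = x_t + N(0, sigma_y^2). Returns filtering means and the log-likelihood estimate."""
    rng = rng or np.random.default_rng(0)
    x = rng.normal(size=n_particles)
    log_lik, means = 0.0, []
    for y in ys:
        x = 0.9 * x + sigma_x * rng.normal(size=n_particles)            # propagate
        logw = -0.5 * ((y - x) / sigma_y) ** 2 - 0.5 * np.log(2 * np.pi * sigma_y**2)
        log_lik += np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))
        # multinomial resampling: a discrete, non-differentiable operation in the weights
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return np.array(means), log_lik

ys = np.cumsum(np.random.default_rng(1).normal(size=50))  # placeholder observations
print(bootstrap_pf(ys)[1])
```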

Stable ResNet

no code implementations • 24 Oct 2020 • Soufiane Hayou, Eugenio Clerico, Bobby He, George Deligiannidis, Arnaud Doucet, Judith Rousseau

Deep ResNet architectures have achieved state-of-the-art performance on many tasks.

Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks

1 code implementation • NeurIPS 2020 • Umut Şimşekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu

Despite its success in a wide range of applications, characterizing the generalization properties of stochastic gradient descent (SGD) in non-convex deep learning problems is still an important challenge.

Generalization Bounds

Localised Generative Flows

no code implementations • 25 Sep 2019 • Rob Cornish, Anthony Caterini, George Deligiannidis, Arnaud Doucet

We argue that flow-based density models based on continuous bijections are limited in their ability to learn target distributions with complicated topologies, and propose localised generative flows (LGFs) to address this problem.

Density Estimation • Normalising Flows
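
The topological obstruction behind this argument is elementary: if $f:\mathbb{R}^d\to\mathbb{R}^d$ is a homeomorphism and $P$ has support $S$, then the pushforward $f_{\#}P$ has support $f(S)$, which is homeomorphic to $S$. A flow started from a full-support Gaussian therefore cannot exactly represent a target whose support has, for example, several connected components.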

Bernoulli Race Particle Filters

no code implementations • 3 Mar 2019 • Sebastian M. Schmon, Arnaud Doucet, George Deligiannidis

When the weights in a particle filter are not available analytically, standard resampling methods cannot be employed.
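
The core selection mechanism can be illustrated as follows, under the assumption that each weight $p_i \in [0,1]$ is only accessible through a Bernoulli($p_i$) coin flip (a minimal sketch of the Bernoulli-race idea, not the paper's full resampling scheme; all names are illustrative):

```python
import numpy as np

def bernoulli_race(coins, rng=None):
    """Return index i with probability p_i / sum_j p_j, where each p_i is only
    accessible via coins[i](), a function returning a Bernoulli(p_i) draw."""
    rng = rng or np.random.default_rng()
    n = len(coins)
    while True:
        i = rng.integers(n)   # propose an index uniformly at random
        if coins[i]():        # keep it if its coin comes up heads
            return int(i)

# illustrative usage: the coins stand in for weights with no analytical form
rng = np.random.default_rng(0)
p = np.array([0.1, 0.4, 0.8])
coins = [lambda q=q: rng.random() < q for q in p]
draws = np.array([bernoulli_race(coins, rng) for _ in range(20000)])
print(np.bincount(draws) / len(draws))  # should approximate p / p.sum()
```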

Unbiased Smoothing using Particle Independent Metropolis-Hastings

no code implementations • 5 Feb 2019 • Lawrence Middleton, George Deligiannidis, Arnaud Doucet, Pierre E. Jacob

We consider the approximation of expectations with respect to the distribution of a latent Markov process given noisy measurements.

Scalable Metropolis-Hastings for Exact Bayesian Inference with Large Datasets

1 code implementation • 28 Jan 2019 • Robert Cornish, Paul Vanetti, Alexandre Bouchard-Côté, George Deligiannidis, Arnaud Doucet

Bayesian inference via standard Markov Chain Monte Carlo (MCMC) methods is too computationally intensive to handle large datasets, since the cost per step usually scales like $\Theta(n)$ in the number of data points $n$.

Bayesian Inference
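
To see where the $\Theta(n)$ cost per step comes from, note that a standard Metropolis-Hastings acceptance ratio touches every data point (a plain random-walk sketch for a placeholder Gaussian mean model, shown for contrast; it is not the paper's algorithm):

```python
import numpy as np

def rwmh_step(theta, data, rng, step=0.1):
    """One random-walk Metropolis-Hastings step for the mean of a N(theta, 1) model
    under a flat prior. The full-data log-likelihood below is the Theta(n) bottleneck."""
    def log_lik(t):
        return -0.5 * np.sum((data - t) ** 2)   # sums over all n data points
    prop = theta + step * rng.normal()          # symmetric proposal
    if np.log(rng.random()) < log_lik(prop) - log_lik(theta):
        return prop
    return theta

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, size=100_000)
theta = 0.0
for _ in range(1000):
    theta = rwmh_step(theta, data, rng)
print(theta)  # should settle near 2.0
```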
