Search Results for author: George Deligiannidis

Found 27 papers, 15 papers with code

Nearly $d$-Linear Convergence Bounds for Diffusion Models via Stochastic Localization

no code implementations • 7 Aug 2023 Joe Benton, Valentin De Bortoli, Arnaud Doucet, George Deligiannidis

We provide the first convergence bounds which are linear in the data dimension (up to logarithmic factors) assuming only finite second moments of the data distribution.

Denoising
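
The bounds concern the standard discretized reverse-diffusion sampler. As a hedged illustration (not code from the paper), here is a minimal NumPy sketch of an Euler-Maruyama sampler for the time-reversed Ornstein-Uhlenbeck process; `score_fn` is an assumed stand-in for a learned score model.

```python
import numpy as np

def reverse_diffusion_sample(score_fn, d, n_steps=1000, T=1.0, seed=None):
    """Euler-Maruyama discretization of the time-reversed OU process
    dX = (X + 2 * score(X, t)) dt + sqrt(2) dW, run from t = T down to 0."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = rng.standard_normal(d)            # initialise at the N(0, I) prior
    for k in range(n_steps):
        t = T - k * dt
        x = x + (x + 2.0 * score_fn(x, t)) * dt
        x = x + np.sqrt(2.0 * dt) * rng.standard_normal(d)
    return x

# Toy check: for N(0, I) data the true score of the OU marginal is -x.
sample = reverse_diffusion_sample(lambda x, t: -x, d=10, seed=0)
```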

On the Expected Size of Conformal Prediction Sets

1 code implementation • 12 Jun 2023 Guneet S. Dhillon, George Deligiannidis, Tom Rainforth

While conformal predictors reap the benefits of rigorous statistical guarantees on their error frequency, the size of their corresponding prediction sets is critical to their practical utility.

Conformal Prediction
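
For context on the objects whose size is being studied, here is a minimal sketch of split conformal prediction for regression; the `model` callable and the absolute-residual score are illustrative assumptions, not the paper's method.

```python
import numpy as np

def split_conformal_interval(model, X_cal, y_cal, x_new, alpha=0.1):
    """Split conformal prediction: the returned interval covers y_new
    with probability >= 1 - alpha, whatever the model's quality."""
    scores = np.abs(y_cal - model(X_cal))            # nonconformity scores
    n = len(scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")  # finite-sample correction
    pred = model(x_new)
    return pred - q, pred + q                        # prediction set = interval
```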

A Unified Framework for U-Net Design and Analysis

1 code implementation • NeurIPS 2023 Christopher Williams, Fabian Falck, George Deligiannidis, Chris Holmes, Arnaud Doucet, Saifuddin Syed

U-Nets are a go-to, state-of-the-art neural architecture across numerous tasks for continuous signals on a square, such as images and Partial Differential Equations (PDEs); however, their design and architecture are understudied.

Image Segmentation · Semantic Segmentation
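
As a hedged reference point for the architecture under analysis, here is a minimal PyTorch sketch of a one-level U-Net with a skip connection; the layer sizes are arbitrary assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Minimal U-Net: downsample, process, upsample, with a skip
    connection concatenating encoder features to the decoder input."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Conv2d(1, ch, 3, padding=1)
        self.down = nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)
        self.dec = nn.Conv2d(2 * ch, 1, 3, padding=1)  # 2*ch: skip + upsampled

    def forward(self, x):
        h = torch.relu(self.enc(x))
        z = torch.relu(self.down(h))
        u = torch.relu(self.up(z))
        return self.dec(torch.cat([h, u], dim=1))

out = TinyUNet()(torch.randn(1, 1, 32, 32))  # output keeps the input size
```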

Error Bounds for Flow Matching Methods

no code implementations • 26 May 2023 Joe Benton, George Deligiannidis, Arnaud Doucet

Previous work derived bounds on the approximation error of diffusion models under the stochastic sampling regime, given assumptions on the $L^2$ loss.

Denoising
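
Flow matching samples by integrating a learned velocity-field ODE; the error bounds concern this deterministic sampling regime. A minimal sketch under that assumption, with `velocity_fn` standing in for a trained model:

```python
import numpy as np

def flow_matching_sample(velocity_fn, d, n_steps=100, seed=None):
    """Euler integration of dx/dt = v(x, t) from t = 0 (noise) to t = 1 (data)."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    dt = 1.0 / n_steps
    for k in range(n_steps):
        x = x + velocity_fn(x, k * dt) * dt
    return x

# Toy check: the velocity (target - x) / (1 - t) transports noise onto
# `target` along straight lines (regularised near t = 1).
target = np.ones(5)
x1 = flow_matching_sample(lambda x, t: (target - x) / (1.0 - t + 1e-3), d=5)
```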

Generalization Bounds with Data-dependent Fractal Dimensions

1 code implementation • 6 Feb 2023 Benjamin Dupuis, George Deligiannidis, Umut Şimşekli

To achieve this goal, we build up on a classical covering argument in learning theory and introduce a data-dependent fractal dimension.

Generalization Bounds · Learning Theory · +1

A Multi-Resolution Framework for U-Nets with Applications to Hierarchical VAEs

no code implementations • 19 Jan 2023 Fabian Falck, Christopher Williams, Dominic Danks, George Deligiannidis, Christopher Yau, Chris Holmes, Arnaud Doucet, Matthew Willetts

U-Net architectures are ubiquitous in state-of-the-art deep learning; however, their regularisation properties and relationship to wavelets are understudied.

From Denoising Diffusions to Denoising Markov Models

1 code implementation • 7 Nov 2022 Joe Benton, Yuyang Shi, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet

We propose a unifying framework generalising this approach to a wide class of spaces and leading to an original extension of score matching.

Denoising
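
The framework generalises score matching beyond Euclidean diffusions. As a baseline illustration, here is a PyTorch sketch of the standard denoising score matching objective the extension starts from; the Gaussian corruption and noise scale are assumptions of the sketch, not the paper's general setting.

```python
import torch

def denoising_score_matching_loss(score_net, x, sigma=0.1):
    """E || s_theta(x + sigma * eps) + eps / sigma ||^2: its minimiser
    is the score of the Gaussian-smoothed data distribution."""
    eps = torch.randn_like(x)
    x_noisy = x + sigma * eps
    target = -eps / sigma        # = grad_x log N(x_noisy; x, sigma^2 I)
    return ((score_net(x_noisy) - target) ** 2).sum(dim=-1).mean()
```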

Generalisation under gradient descent via deterministic PAC-Bayes

no code implementations • 6 Sep 2022 Eugenio Clerico, Tyler Farghly, George Deligiannidis, Benjamin Guedj, Arnaud Doucet

We establish disintegrated PAC-Bayesian generalisation bounds for models trained with gradient descent methods or continuous gradient flows.

A Continuous Time Framework for Discrete Denoising Models

1 code implementation • 30 May 2022 Andrew Campbell, Joe Benton, Valentin De Bortoli, Tom Rainforth, George Deligiannidis, Arnaud Doucet

We provide the first complete continuous time framework for denoising diffusion models of discrete data.

Denoising
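
In continuous time, the forward corruption of discrete data is a continuous-time Markov chain. A minimal sketch simulating one such chain with uniform jumps (the rate and jump kernel are illustrative assumptions, not the paper's choices):

```python
import numpy as np

def corrupt_ctmc(x0, n_states, rate=1.0, t_end=1.0, seed=None):
    """Forward corruption as a continuous-time Markov chain: wait an
    Exponential(rate) time, then jump to a uniformly random state."""
    rng = np.random.default_rng(seed)
    x, t = x0, 0.0
    while True:
        t += rng.exponential(1.0 / rate)   # waiting time to the next jump
        if t > t_end:
            return x
        x = int(rng.integers(n_states))    # jump to a uniform state

print(corrupt_ctmc(x0=3, n_states=10, seed=0))
```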

Chained Generalisation Bounds

no code implementations • 2 Mar 2022 Eugenio Clerico, Amitis Shidani, George Deligiannidis, Arnaud Doucet

This work discusses how to derive upper bounds for the expected generalisation error of supervised learning algorithms by means of the chaining technique.

Neural Score Matching for High-Dimensional Causal Inference

1 code implementation • 1 Mar 2022 Oscar Clivio, Fabian Falck, Brieuc Lehmann, George Deligiannidis, Chris Holmes

We leverage these balancing scores to perform matching for high-dimensional causal inference and call this procedure neural score matching.

Causal Inference
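
Once balancing scores are available, the matching step itself is simple. A hedged one-dimensional sketch of nearest-neighbour matching on scores; the ATT estimand and 1-NN rule are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def nn_match_att(scores_t, y_t, scores_c, y_c):
    """Match each treated unit to the control with the nearest balancing
    score; average the outcome differences to estimate the ATT."""
    idx = np.abs(scores_t[:, None] - scores_c[None, :]).argmin(axis=1)
    return float(np.mean(y_t - y_c[idx]))
```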

On Mixing Times of Metropolized Algorithm With Optimization Step (MAO): A New Framework

no code implementations • 1 Dec 2021 El Mahdi Khribch, George Deligiannidis, Daniel Paulin

In this paper, we consider sampling from a class of distributions with thin tails supported on $\mathbb{R}^d$ and make two primary contributions.
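
For background on the kind of sampler whose mixing times such analyses target, here is a generic random-walk Metropolis sketch; MAO's optimization step is omitted, so treat this purely as context, not as the paper's algorithm:

```python
import numpy as np

def random_walk_metropolis(log_pi, x0, n_steps=10_000, step=0.5, seed=None):
    """Random-walk Metropolis targeting pi known up to a constant:
    propose x + step * eps, accept with probability min(1, pi(y)/pi(x))."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    lp = log_pi(x)
    chain = np.empty((n_steps, x.size))
    for i in range(n_steps):
        y = x + step * rng.standard_normal(x.shape)
        lpy = log_pi(y)
        if np.log(rng.uniform()) < lpy - lp:
            x, lp = y, lpy
        chain[i] = x
    return chain

draws = random_walk_metropolis(lambda x: -0.5 * np.sum(x ** 2), np.zeros(2))
```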

Conditionally Gaussian PAC-Bayes

1 code implementation • 22 Oct 2021 Eugenio Clerico, George Deligiannidis, Arnaud Doucet

Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent.
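
A hedged sketch of the general recipe described here: optimise an empirical loss plus a McAllester-style PAC-Bayes complexity term over a diagonal Gaussian posterior. The bound form and parameterisation are illustrative assumptions, not the paper's specific bound.

```python
import math
import torch

def pac_bayes_objective(emp_loss, mu, log_sigma, prior_sigma, n, delta=0.05):
    """McAllester-style training objective: bounded empirical loss plus a
    sqrt((KL + log term) / n) penalty, for a diagonal Gaussian posterior
    N(mu, sigma^2) and prior N(0, prior_sigma^2)."""
    sigma2 = (2 * log_sigma).exp()
    kl = 0.5 * ((sigma2 + mu ** 2) / prior_sigma ** 2
                - 1 - 2 * log_sigma + 2 * math.log(prior_sigma)).sum()
    complexity = torch.sqrt((kl + math.log(2 * math.sqrt(n) / delta)) / (2 * n))
    return emp_loss + complexity
```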

Wide stochastic networks: Gaussian limit and PAC-Bayesian training

1 code implementation • 17 Jun 2021 Eugenio Clerico, George Deligiannidis, Arnaud Doucet

The limit of infinite width allows for substantial simplifications in the analytical study of over-parameterised neural networks.

Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms

no code implementations • NeurIPS 2021 Alexander Camuto, George Deligiannidis, Murat A. Erdogdu, Mert Gürbüzbalaban, Umut Şimşekli, Lingjiong Zhu

As our main contribution, we prove that the generalization error of a stochastic optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.

Generalization Bounds · Learning Theory · +1

Differentiable Particle Filtering via Entropy-Regularized Optimal Transport

1 code implementation • 15 Feb 2021 Adrien Corenflos, James Thornton, George Deligiannidis, Arnaud Doucet

Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models.

Variational Inference
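
The key idea is to replace non-differentiable multinomial resampling with an entropy-regularized optimal transport plan. A minimal NumPy sketch of Sinkhorn-based resampling, with the squared-distance cost and ε chosen purely for illustration:

```python
import numpy as np

def sinkhorn_resample(particles, logw, eps=0.1, n_iter=100):
    """Transport the weighted particle measure onto uniform weights with
    a Sinkhorn plan; unlike multinomial resampling, the output is
    differentiable in the particle weights and locations."""
    n = len(particles)
    w = np.exp(logw - logw.max()); w /= w.sum()
    C = np.sum((particles[:, None, :] - particles[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / eps)
    u, v = np.ones(n), np.ones(n)
    for _ in range(n_iter):                # Sinkhorn fixed-point iterations
        u = w / (K @ v)
        v = (np.ones(n) / n) / (K.T @ u)
    P = u[:, None] * K * v[None, :]        # plan: rows sum to w, cols to 1/n
    return n * P.T @ particles             # barycentric resampled particles
```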

Stable ResNet

no code implementations • 24 Oct 2020 Soufiane Hayou, Eugenio Clerico, Bobby He, George Deligiannidis, Arnaud Doucet, Judith Rousseau

Deep ResNet architectures have achieved state-of-the-art performance on many tasks.
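
A hedged PyTorch sketch of the depth-scaled residual block this line of work studies, where the residual branch is damped as depth grows; the uniform 1/√L factor here is an assumption of the sketch, so consult the paper for the scalings actually analysed.

```python
import torch
import torch.nn as nn

class ScaledResidualBlock(nn.Module):
    """Residual block with its branch scaled by 1/sqrt(L), keeping
    activations O(1) as the depth L grows."""
    def __init__(self, dim, depth_L):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                               nn.Linear(dim, dim))
        self.scale = depth_L ** -0.5

    def forward(self, x):
        return x + self.scale * self.f(x)

L = 100
net = nn.Sequential(*[ScaledResidualBlock(64, L) for _ in range(L)])
y = net(torch.randn(8, 64))
```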

Hausdorff Dimension, Heavy Tails, and Generalization in Neural Networks

1 code implementation • NeurIPS 2020 Umut Şimşekli, Ozan Sener, George Deligiannidis, Murat A. Erdogdu

Despite its success in a wide range of applications, characterizing the generalization properties of stochastic gradient descent (SGD) in non-convex deep learning problems is still an important challenge.

Generalization Bounds

Localised Generative Flows

no code implementations • 25 Sep 2019 Rob Cornish, Anthony Caterini, George Deligiannidis, Arnaud Doucet

We argue that flow-based density models based on continuous bijections are limited in their ability to learn target distributions with complicated topologies, and propose localised generative flows (LGFs) to address this problem.

Density Estimation · Normalising Flows

Bernoulli Race Particle Filters

no code implementations • 3 Mar 2019 Sebastian M. Schmon, Arnaud Doucet, George Deligiannidis

When the weights in a particle filter are not available analytically, standard resampling methods cannot be employed.

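The Bernoulli race resolves this by sampling an index in proportion to weights that are only accessible as success probabilities of Bernoulli coins. A minimal sketch, with `coin_flip` an assumed callable that flips the i-th coin:

```python
import numpy as np

def bernoulli_race(coin_flip, n, rng):
    """Return index i with probability p_i / sum_j p_j, where each p_i is
    accessible only through the Bernoulli(p_i) coin `coin_flip(i)`:
    propose a uniform index and keep it on the first success."""
    while True:
        i = int(rng.integers(n))
        if coin_flip(i):
            return i

# Toy check: coins with p = (0.1, 0.2, 0.4) yield draws ~ (1/7, 2/7, 4/7).
p = np.array([0.1, 0.2, 0.4])
rng = np.random.default_rng(0)
draws = [bernoulli_race(lambda i: rng.uniform() < p[i], 3, rng)
         for _ in range(1000)]
```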

Unbiased Smoothing using Particle Independent Metropolis-Hastings

no code implementations • 5 Feb 2019 Lawrence Middleton, George Deligiannidis, Arnaud Doucet, Pierre E. Jacob

We consider the approximation of expectations with respect to the distribution of a latent Markov process given noisy measurements.

Scalable Metropolis-Hastings for Exact Bayesian Inference with Large Datasets

1 code implementation • 28 Jan 2019 Robert Cornish, Paul Vanetti, Alexandre Bouchard-Côté, George Deligiannidis, Arnaud Doucet

Bayesian inference via standard Markov Chain Monte Carlo (MCMC) methods is too computationally intensive to handle large datasets, since the cost per step usually scales like $\Theta(n)$ in the number of data points $n$.

Bayesian Inference
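
To see where the Θ(n) cost comes from, here is a naive sketch of one full-data MH step; this is the baseline being illustrated, not the paper's algorithm, which avoids touching all n points per step:

```python
import numpy as np

def naive_mh_step(x, log_prior, log_lik, data, step, rng):
    """One random-walk MH step for a Bayesian posterior. The acceptance
    ratio sums over every data point, so each step costs Theta(n) --
    exactly the bottleneck Scalable Metropolis-Hastings removes."""
    y = x + step * rng.standard_normal(x.shape)
    log_alpha = (log_prior(y) - log_prior(x)
                 + sum(log_lik(y, d) for d in data)   # Theta(n) work
                 - sum(log_lik(x, d) for d in data))  # Theta(n) work
    return y if np.log(rng.uniform()) < log_alpha else x
```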
