Search Results for author: Arnaud Doucet

Found 61 papers, 30 papers with code

Chained Generalisation Bounds

no code implementations 2 Mar 2022 Eugenio Clerico, Amitis Shidani, George Deligiannidis, Arnaud Doucet

This work discusses how to derive upper bounds for the expected generalisation error of supervised learning algorithms by means of the chaining technique.

On PAC-Bayesian reconstruction guarantees for VAEs

no code implementations 23 Feb 2022 Badr-Eddine Chérief-Abdellatif, Yuyang Shi, Arnaud Doucet, Benjamin Guedj

Despite its wide use and empirical successes, the theoretical understanding and study of the behaviour and performance of the variational autoencoder (VAE) have only emerged in the past few years.

Riemannian Score-Based Generative Modeling

no code implementations 6 Feb 2022 Valentin De Bortoli, Emile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet

To overcome this issue, we introduce Riemannian Score-based Generative Models (RSGMs) which extend current SGMs to the setting of compact Riemannian manifolds.


Importance Weighting Approach in Kernel Bayes' Rule

no code implementations 5 Feb 2022 Liyuan Xu, Yutian Chen, Arnaud Doucet, Arthur Gretton

We study a nonparametric approach to Bayesian computation via feature means, where the expectation of prior features is updated to yield expected posterior features, based on regression from kernel or neural net features of the observations.

Continual Repeated Annealed Flow Transport Monte Carlo

1 code implementation 31 Jan 2022 Alexander G. D. G. Matthews, Michael Arbel, Danilo J. Rezende, Arnaud Doucet

We propose Continual Repeated Annealed Flow Transport Monte Carlo (CRAFT), a method that combines a sequential Monte Carlo (SMC) sampler (itself a generalization of Annealed Importance Sampling) with variational inference using normalizing flows.

Variational Inference

COIN++: Data Agnostic Neural Compression

no code implementations 30 Jan 2022 Emilien Dupont, Hrushikesh Loya, Milad Alizadeh, Adam Goliński, Yee Whye Teh, Arnaud Doucet

Neural compression algorithms are typically based on autoencoders that require specialized encoder and decoder architectures for different data modalities.

NEO: Non Equilibrium Sampling on the Orbits of a Deterministic Transform

1 code implementation NeurIPS 2021 Achille Thin, Yazid Janati El Idrissi, Sylvain Le Corff, Charles Ollion, Eric Moulines, Arnaud Doucet, Alain Durmus, Christian Robert

Sampling from a complex distribution $\pi$ and approximating its intractable normalizing constant $\mathrm{Z}$ are challenging problems.
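The NEO estimator itself is not reproduced here; as a point of reference for the problem being described, a plain self-normalized importance sampling estimate of an intractable normalizing constant can be sketched as follows (the Gaussian target and proposal are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized target: pi_tilde(x) = exp(-x^2 / 2), so Z = sqrt(2*pi).
log_pi_tilde = lambda x: -0.5 * x**2

# Proposal q = N(0, 2^2): easy to sample from and to evaluate.
sigma_q = 2.0
x = rng.normal(0.0, sigma_q, size=200_000)
log_q = -0.5 * (x / sigma_q) ** 2 - np.log(sigma_q * np.sqrt(2 * np.pi))

# Z = E_q[pi_tilde(X) / q(X)], estimated by the sample mean of the weights.
log_w = log_pi_tilde(x) - log_q
Z_hat = np.exp(log_w).mean()

print(Z_hat, np.sqrt(2 * np.pi))  # the two values should be close
```

NEO and the other samplers on this page can be read as ways of producing much lower-variance weights than this naive baseline.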

Simulating Diffusion Bridges with Score Matching

1 code implementation 14 Nov 2021 Valentin De Bortoli, Arnaud Doucet, Jeremy Heng, James Thornton

We consider the problem of simulating diffusion bridges, i.e. diffusion processes that are conditioned to initialize and terminate at two given states.

Online Variational Filtering and Parameter Learning

1 code implementation NeurIPS 2021 Andrew Campbell, Yuyang Shi, Tom Rainforth, Arnaud Doucet

We present a variational method for online state estimation and parameter learning in state-space models (SSMs), a ubiquitous class of latent variable models for sequential data.

Conditionally Gaussian PAC-Bayes

1 code implementation 22 Oct 2021 Eugenio Clerico, George Deligiannidis, Arnaud Doucet

Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent.

Learning Optimal Conformal Classifiers

no code implementations ICLR 2022 David Stutz, Krishnamurthy Dvijotham, Ali Taylan Cemgil, Arnaud Doucet

However, using CP as a separate processing step after training prevents the underlying model from adapting to the prediction of confidence sets.

Medical Diagnosis

The Curse of Depth in Kernel Regime

no code implementations NeurIPS Workshop ICBINB 2021 Soufiane Hayou, Arnaud Doucet, Judith Rousseau

Recent work by Jacot et al. (2018) has shown that training a neural network of any kind with gradient descent is strongly related to kernel gradient descent in function space with respect to the Neural Tangent Kernel (NTK).

Mitigating Statistical Bias within Differentially Private Synthetic Data

no code implementations 24 Aug 2021 Sahra Ghalebikesabi, Harrison Wilde, Jack Jewson, Arnaud Doucet, Sebastian Vollmer, Chris Holmes

Increasing interest in privacy-preserving machine learning has led to new and evolved approaches for generating private synthetic data from undisclosed real data.

Monte Carlo Variational Auto-Encoders

1 code implementation 30 Jun 2021 Achille Thin, Nikita Kotelevskii, Arnaud Doucet, Alain Durmus, Eric Moulines, Maxim Panov

Variational auto-encoders (VAE) are popular deep latent variable models which are trained by maximizing an Evidence Lower Bound (ELBO).
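As background for the training objective mentioned here, a Monte Carlo estimate of the ELBO can be sketched on a toy linear-Gaussian latent variable model, where the true evidence is available in closed form so the bound can be checked. The model and variational family below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = 1.5  # single observation

# Toy model: z ~ N(0,1), x | z ~ N(z, 1).
# True evidence in closed form: p(x0) = N(x0; 0, 2).
log_evidence = -0.5 * x0**2 / 2 - 0.5 * np.log(2 * np.pi * 2)

def log_normal(y, mean, var):
    return -0.5 * (y - mean) ** 2 / var - 0.5 * np.log(2 * np.pi * var)

# A (deliberately non-optimal) Gaussian variational posterior q(z).
mu, var = 1.0, 1.0
z = rng.normal(mu, np.sqrt(var), size=100_000)

# Monte Carlo ELBO: E_q[log p(x0, z) - log q(z)].
elbo = (log_normal(x0, z, 1.0) + log_normal(z, 0.0, 1.0)
        - log_normal(z, mu, var)).mean()

print(elbo, log_evidence)  # the ELBO lower-bounds the log-evidence
```

The gap between the two printed numbers is the KL divergence from q to the true posterior, which is what the MCMC-based constructions in this paper aim to shrink.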

Wide stochastic networks: Gaussian limit and PAC-Bayesian training

1 code implementation 17 Jun 2021 Eugenio Clerico, George Deligiannidis, Arnaud Doucet

The limit of infinite width allows for substantial simplifications in the analytical study of overparameterized neural networks.

Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling

1 code implementation NeurIPS 2021 Valentin De Bortoli, James Thornton, Jeremy Heng, Arnaud Doucet

In contrast, solving the Schrödinger Bridge problem (SB), i.e. an entropy-regularized optimal transport problem on path spaces, yields diffusions which generate samples from the data distribution in finite time.

On Instrumental Variable Regression for Deep Offline Policy Evaluation

1 code implementation 21 May 2021 Yutian Chen, Liyuan Xu, Caglar Gulcehre, Tom Le Paine, Arthur Gretton, Nando de Freitas, Arnaud Doucet

By applying different IV techniques to OPE, we are not only able to recover previously proposed OPE methods such as model-based techniques but also to obtain competitive new techniques.

NEO: Non Equilibrium Sampling on the Orbit of a Deterministic Transform

1 code implementation 17 Mar 2021 Achille Thin, Yazid Janati, Sylvain Le Corff, Charles Ollion, Arnaud Doucet, Alain Durmus, Eric Moulines, Christian Robert

Sampling from a complex distribution $\pi$ and approximating its intractable normalizing constant Z are challenging problems.

COIN: COmpression with Implicit Neural representations

1 code implementation ICLR Workshop Neural_Compression 2021 Emilien Dupont, Adam Goliński, Milad Alizadeh, Yee Whye Teh, Arnaud Doucet

We propose a new simple approach for image compression: instead of storing the RGB values for each pixel of an image, we store the weights of a neural network overfitted to the image.
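A stripped-down sketch of this idea follows. It replaces the paper's neural network and quantization pipeline with random Fourier features fit by least squares, purely to illustrate the core trick of storing the weights of a function from pixel coordinates to intensities instead of the pixels themselves; all sizes and the synthetic "image" are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny synthetic grayscale "image": intensity as a smooth function of (x, y).
n = 32
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
coords = np.stack([xs.ravel(), ys.ravel()], axis=1)   # (n*n, 2)
target = (np.sin(4 * xs) * np.cos(3 * ys)).ravel()    # (n*n,)

# Featurize coordinates with random Fourier features, then fit a linear
# readout by least squares; the "compressed image" is just the weight vector.
B = rng.normal(0, 6.0, size=(2, 64))                  # random frequencies
features = np.concatenate([np.sin(coords @ B), np.cos(coords @ B)], axis=1)
weights, *_ = np.linalg.lstsq(features, target, rcond=None)

recon = features @ weights
mse = np.mean((recon - target) ** 2)
print(f"stored {weights.size} weights for {target.size} pixels, MSE={mse:.2e}")
```

Here 128 stored weights reconstruct 1024 pixels; COIN's contribution is making this competitive on real images via overfitted, quantized networks.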

Data Compression · Image Compression

Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding

1 code implementation ICLR Workshop Neural_Compression 2021 Yangjun Ruan, Karen Ullrich, Daniel Severo, James Townsend, Ashish Khisti, Arnaud Doucet, Alireza Makhzani, Chris J. Maddison

Naively applied, our schemes would require more initial bits than the standard bits-back coder, but we show how to drastically reduce this additional cost with couplings in the latent space.

Data Compression

Annealed Flow Transport Monte Carlo

1 code implementation 15 Feb 2021 Michael Arbel, Alexander G. D. G. Matthews, Arnaud Doucet

Annealed Importance Sampling (AIS) and its Sequential Monte Carlo (SMC) extensions are state-of-the-art methods for estimating normalizing constants of probability distributions.
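A minimal AIS implementation on a one-dimensional toy problem gives a concrete picture of the baseline this paper extends: anneal from a tractable prior to the target along a geometric path, accumulating importance weights and applying an MCMC move at each temperature. The target, schedule, and random-walk kernel below are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)

log_p0 = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)  # normalized prior N(0,1)
log_pt = lambda x: -0.5 * (x - 2.0) ** 2 / 0.25           # unnormalized target
true_Z = np.sqrt(2 * np.pi * 0.25)                        # its normalizing constant

n, betas = 4000, np.linspace(0, 1, 101)
x = rng.normal(size=n)
log_w = np.zeros(n)

for b_prev, b in zip(betas[:-1], betas[1:]):
    # Weight update: ratio of successive annealed densities at the current x.
    log_w += (b - b_prev) * (log_pt(x) - log_p0(x))
    # One random-walk Metropolis move targeting the annealed density f_b.
    prop = x + 0.5 * rng.normal(size=n)
    log_acc = ((1 - b) * log_p0(prop) + b * log_pt(prop)
               - (1 - b) * log_p0(x) - b * log_pt(x))
    accept = np.log(rng.uniform(size=n)) < log_acc
    x = np.where(accept, prop, x)

Z_hat = np.exp(log_w).mean()
print(Z_hat, true_Z)
```

Annealed Flow Transport interleaves learned normalizing-flow transports between these temperature steps to reduce the variance of `log_w`.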

Differentiable Particle Filtering via Entropy-Regularized Optimal Transport

1 code implementation 15 Feb 2021 Adrien Corenflos, James Thornton, George Deligiannidis, Arnaud Doucet

Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models.
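The class of methods referred to can be illustrated with a bootstrap particle filter on a linear Gaussian model, where the exact answer is available from the Kalman filter for comparison. The toy model is for illustration only; the multinomial resampling step below is exactly the non-differentiable operation this paper replaces with entropy-regularized optimal transport:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear Gaussian state-space model:
#   x_t = 0.9 x_{t-1} + N(0, 1),   y_t = x_t + N(0, 0.25)
T, a, q, r = 50, 0.9, 1.0, 0.25
x_true, y = np.zeros(T), np.zeros(T)
for t in range(T):
    x_true[t] = a * (x_true[t - 1] if t else 0.0) + rng.normal(0, np.sqrt(q))
    y[t] = x_true[t] + rng.normal(0, np.sqrt(r))

# Bootstrap particle filter: propagate from the prior, weight by the
# likelihood, resample multinomially.
N = 5000
particles = rng.normal(0, 1, N)
pf_means = []
for t in range(T):
    particles = a * particles + rng.normal(0, np.sqrt(q), N)
    logw = -0.5 * (y[t] - particles) ** 2 / r
    w = np.exp(logw - logw.max()); w /= w.sum()
    pf_means.append(w @ particles)
    particles = particles[rng.choice(N, N, p=w)]

# Exact Kalman filter for reference.
m, P, kf_means = 0.0, 1.0, []
for t in range(T):
    m, P = a * m, a * a * P + q              # predict
    K = P / (P + r)                          # gain
    m, P = m + K * (y[t] - m), (1 - K) * P   # update
    kf_means.append(m)

err = np.max(np.abs(np.array(pf_means) - np.array(kf_means)))
print(f"max |PF mean - Kalman mean| = {err:.3f}")
```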

Variational Inference

Generative Models as Distributions of Functions

1 code implementation 9 Feb 2021 Emilien Dupont, Yee Whye Teh, Arnaud Doucet

By treating data points as functions, we can abstract away from the specific type of data we train on and construct models that are agnostic to discretization.

Stable ResNet

no code implementations 24 Oct 2020 Soufiane Hayou, Eugenio Clerico, Bobby He, George Deligiannidis, Arnaud Doucet, Judith Rousseau

Deep ResNet architectures have achieved state of the art performance on many tasks.

Learning Deep Features in Instrumental Variable Regression

1 code implementation ICLR 2021 Liyuan Xu, Yutian Chen, Siddarth Srinivasan, Nando de Freitas, Arnaud Doucet, Arthur Gretton

We propose a novel method, deep feature instrumental variable regression (DFIV), to address the case where relations between instruments, treatments, and outcomes may be nonlinear.

Unbiased Gradient Estimation for Variational Auto-Encoders using Coupled Markov Chains

no code implementations 5 Oct 2020 Francisco J. R. Ruiz, Michalis K. Titsias, Taylan Cemgil, Arnaud Doucet

The variational auto-encoder (VAE) is a deep latent variable model that has two neural networks in an autoencoder-like architecture; one of them parameterizes the model's likelihood.

Variational Inference with Continuously-Indexed Normalizing Flows

1 code implementation 10 Jul 2020 Anthony Caterini, Rob Cornish, Dino Sejdinovic, Arnaud Doucet

Continuously-indexed flows (CIFs) have recently achieved improvements over baseline normalizing flows on a variety of density estimation tasks.

Bayesian Inference · Density Estimation +1

Noisy Adaptive Group Testing using Bayesian Sequential Experimental Design

no code implementations 26 Apr 2020 Marco Cuturi, Olivier Teboul, Quentin Berthet, Arnaud Doucet, Jean-Philippe Vert

Our goal in this paper is to propose new group testing algorithms that can operate in a noisy setting (tests can be mistaken) to decide adaptively (looking at past results) which groups to test next, with the goal to converge to a good detection, as quickly, and with as few tests as possible.

Experimental Design

Robust Pruning at Initialization

no code implementations ICLR 2021 Soufiane Hayou, Jean-Francois Ton, Arnaud Doucet, Yee Whye Teh

Overparameterized Neural Networks (NN) display state-of-the-art performance.

Schrödinger Bridge Samplers

no code implementations 31 Dec 2019 Espen Bernton, Jeremy Heng, Arnaud Doucet, Pierre E. Jacob

This is achieved by iteratively modifying the transition kernels of the reference Markov chain to obtain a process whose marginal distribution at time $T$ becomes closer to $\pi_T = \pi$, via regression-based approximations of the corresponding iterative proportional fitting recursion.

Localised Generative Flows

no code implementations 25 Sep 2019 Rob Cornish, Anthony Caterini, George Deligiannidis, Arnaud Doucet

We argue that flow-based density models based on continuous bijections are limited in their ability to learn target distributions with complicated topologies, and propose localised generative flows (LGFs) to address this problem.

Density Estimation · Normalising Flows

Modular Meta-Learning with Shrinkage

no code implementations NeurIPS 2020 Yutian Chen, Abram L. Friesen, Feryal Behbahani, Arnaud Doucet, David Budden, Matthew W. Hoffman, Nando de Freitas

Many real-world problems, including multi-speaker text-to-speech synthesis, can greatly benefit from the ability to meta-learn large models with only a few task-specific components.

Image Classification · Meta-Learning +2

Mean-field Behaviour of Neural Tangent Kernel for Deep Neural Networks

no code implementations 31 May 2019 Soufiane Hayou, Arnaud Doucet, Judith Rousseau

Recent work by Jacot et al. (2018) has shown that training a neural network of any kind with gradient descent in parameter space is strongly related to kernel gradient descent in function space with respect to the Neural Tangent Kernel (NTK).

Augmented Neural ODEs

5 code implementations NeurIPS 2019 Emilien Dupont, Arnaud Doucet, Yee Whye Teh

We show that Neural Ordinary Differential Equations (ODEs) learn representations that preserve the topology of the input space and prove that this implies the existence of functions Neural ODEs cannot represent.

Image Classification

Bernoulli Race Particle Filters

no code implementations 3 Mar 2019 Sebastian M. Schmon, Arnaud Doucet, George Deligiannidis

When the weights in a particle filter are not available analytically, standard resampling methods cannot be employed.

On the Impact of the Activation Function on Deep Neural Networks Training

no code implementations 19 Feb 2019 Soufiane Hayou, Arnaud Doucet, Judith Rousseau

The weight initialization and the activation function of deep neural networks have a crucial impact on the performance of the training procedure.

Unbiased Smoothing using Particle Independent Metropolis-Hastings

no code implementations 5 Feb 2019 Lawrence Middleton, George Deligiannidis, Arnaud Doucet, Pierre E. Jacob

We consider the approximation of expectations with respect to the distribution of a latent Markov process given noisy measurements.

Scalable Metropolis-Hastings for Exact Bayesian Inference with Large Datasets

1 code implementation 28 Jan 2019 Robert Cornish, Paul Vanetti, Alexandre Bouchard-Côté, George Deligiannidis, Arnaud Doucet

Bayesian inference via standard Markov Chain Monte Carlo (MCMC) methods is too computationally intensive to handle large datasets, since the cost per step usually scales like $\Theta(n)$ in the number of data points $n$.
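The $\Theta(n)$ per-step cost is visible in a vanilla random-walk Metropolis–Hastings sampler, where every accept/reject decision evaluates the likelihood over the full dataset. The conjugate toy model below is for illustration and is not the paper's subsampling scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

# n observations from N(theta_true, 1); prior theta ~ N(0, 10^2).
n, theta_true = 10_000, 1.0
data = rng.normal(theta_true, 1.0, n)

def log_post(theta):
    # The sum runs over ALL n points: this is the Theta(n) per-step cost.
    return -0.5 * theta**2 / 100.0 - 0.5 * np.sum((data - theta) ** 2)

theta, samples = 0.0, []
lp = log_post(theta)
for _ in range(5000):
    prop = theta + 0.02 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)

post_mean = np.mean(samples[1000:])
print(post_mean)  # close to the sample mean of the data
```

Scalable MCMC methods such as the one in this paper aim to reproduce this posterior while touching only a subset of `data` at each step.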

Bayesian Inference

Hamiltonian Descent Methods

4 code implementations 13 Sep 2018 Chris J. Maddison, Daniel Paulin, Yee Whye Teh, Brendan O'Donoghue, Arnaud Doucet

Yet, crucially the kinetic gradient map can be designed to incorporate information about the convex conjugate in a fashion that allows for linear convergence on convex functions that may be non-smooth or non-strongly convex.

Asymptotic Properties of Recursive Maximum Likelihood Estimation in Non-Linear State-Space Models

no code implementations 25 Jun 2018 Vladislav Z. B. Tadic, Arnaud Doucet

Using stochastic gradient search and the optimal filter derivative, it is possible to perform recursive (i.e., online) maximum likelihood estimation in a non-linear state-space model.

Hamiltonian Variational Auto-Encoder

3 code implementations NeurIPS 2018 Anthony L. Caterini, Arnaud Doucet, Dino Sejdinovic

However, for this methodology to be practically efficient, it is necessary to obtain low-variance unbiased estimators of the ELBO and its gradients with respect to the parameters of interest.

Variational Inference

On the Selection of Initialization and Activation Function for Deep Neural Networks

no code implementations ICLR 2019 Soufiane Hayou, Arnaud Doucet, Judith Rousseau

We complete this analysis by providing quantitative results showing that, for a class of ReLU-like activation functions, the information propagates indeed deeper for an initialization at the edge of chaos.

Clone MCMC: Parallel High-Dimensional Gaussian Gibbs Sampling

no code implementations NeurIPS 2017 Andrei-Cristian Barbos, Francois Caron, Jean-François Giovannelli, Arnaud Doucet

We propose a generalized Gibbs sampler algorithm for obtaining samples approximately distributed from a high-dimensional Gaussian distribution.
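For context, the serial scheme this paper generalizes can be shown on the classical bivariate special case: a Gibbs sampler alternating draws from the exact full conditionals of a correlated Gaussian (the paper's contribution is an approximate, parallelizable variant for high dimensions; this two-dimensional example is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.8  # target: zero-mean bivariate Gaussian, unit variances, correlation rho

x, y, samples = 0.0, 0.0, []
for _ in range(20_000):
    # Exact full conditionals of the bivariate Gaussian.
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples.append((x, y))

samples = np.array(samples[1000:])
emp_corr = np.corrcoef(samples.T)[0, 1]
print(emp_corr)  # should be close to 0.8
```

Note the sequential dependence: each coordinate draw conditions on the freshly updated other coordinate, which is precisely what blocks naive parallelization.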

Asymptotic Bias of Stochastic Gradient Search

no code implementations 30 Aug 2017 Vladislav B. Tadic, Arnaud Doucet

Relying on the same results, the asymptotic behavior of the recursive maximum split-likelihood estimation in hidden Markov models is analyzed, too.


Filtering Variational Objectives

3 code implementations NeurIPS 2017 Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, Yee Whye Teh

When used as a surrogate objective for maximum likelihood estimation in latent variable models, the evidence lower bound (ELBO) produces state-of-the-art results.

Particle Value Functions

no code implementations 16 Mar 2017 Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh

The policy gradients of the expected return objective can react slowly to rare rewards.


Piecewise Deterministic Markov Processes for Scalable Monte Carlo on Restricted Domains

4 code implementations 16 Jan 2017 Joris Bierkens, Alexandre Bouchard-Côté, Arnaud Doucet, Andrew B. Duncan, Paul Fearnhead, Thibaut Lienart, Gareth Roberts, Sebastian J. Vollmer

Piecewise Deterministic Monte Carlo algorithms enable simulation from a posterior distribution, whilst only needing to access a sub-sample of data at each iteration.

Methodology · Computation

Pseudo-Marginal Hamiltonian Monte Carlo

no code implementations 8 Jul 2016 Johan Alenlöv, Arnaud Doucet, Fredrik Lindsten

When following a Markov chain Monte Carlo (MCMC) approach to approximate the posterior distribution in this context, one typically either uses MCMC schemes which target the joint posterior of the parameters and some auxiliary latent variables, or pseudo-marginal Metropolis--Hastings (MH) schemes.
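A minimal pseudo-marginal MH sketch illustrates the second option: the density is only available through an unbiased, non-negative noisy estimate, and the estimate at the current point is recycled rather than refreshed, which is what keeps the exact target invariant. The toy target and lognormal noise model below are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

log_pi = lambda x: -0.5 * x**2  # exact (unnormalized) target: N(0, 1)

def log_pi_hat(x):
    # Noisy but unbiased estimate of pi(x): multiply by lognormal W, E[W] = 1.
    s = 0.3
    return log_pi(x) + rng.normal(-0.5 * s**2, s)

# Pseudo-marginal MH: the estimate at the current point is RECYCLED between
# iterations, never recomputed.
x, lp = 0.0, log_pi_hat(0.0)
samples = []
for _ in range(50_000):
    prop = x + 1.0 * rng.normal()
    lp_prop = log_pi_hat(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    samples.append(x)

s = np.array(samples[5000:])
print(s.mean(), s.std())  # mean ~ 0, std ~ 1 despite the noisy density
```

Replacing `log_pi_hat` with a particle-filter likelihood estimate gives the particle MCMC setting that this paper's Hamiltonian variant addresses.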

Bayesian Inference

Interacting Particle Markov Chain Monte Carlo

1 code implementation 16 Feb 2016 Tom Rainforth, Christian A. Naesseth, Fredrik Lindsten, Brooks Paige, Jan-Willem van de Meent, Arnaud Doucet, Frank Wood

We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method based on an interacting pool of standard and conditional sequential Monte Carlo samplers.

The Bouncy Particle Sampler: A Non-Reversible Rejection-Free Markov Chain Monte Carlo Method

3 code implementations 8 Oct 2015 Alexandre Bouchard-Côté, Sebastian J. Vollmer, Arnaud Doucet

We explore and propose several original extensions of an alternative approach introduced recently in Peters and de With (2012) where the target distribution of interest is explored using a continuous-time Markov process.

Methodology · Statistics Theory

Gibbs Flow for Approximate Transport with Applications to Bayesian Computation

1 code implementation 29 Sep 2015 Jeremy Heng, Arnaud Doucet, Yvo Pokern

Any measurable function $T:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ such that $Y=T(X)\sim\pi_{1}$ if $X\sim\pi_{0}$ is called a transport map from $\pi_{0}$ to $\pi_{1}$.
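The definition admits a one-line example (illustrative only, not the paper's Gibbs flow): an affine map transports one Gaussian onto another, and in one dimension $T = F_1^{-1} \circ F_0$ (the quantile function of $\pi_1$ composed with the CDF of $\pi_0$) is always a valid transport map.

```python
import numpy as np

rng = np.random.default_rng(0)

# pi_0 = N(0, 1), pi_1 = N(2, 3^2).  The affine map T(x) = 2 + 3x is a
# transport map: if X ~ pi_0 then T(X) ~ pi_1.
X = rng.normal(size=100_000)
Y = 2.0 + 3.0 * X

print(Y.mean(), Y.std())  # approximately 2 and 3
```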


Expectation Particle Belief Propagation

1 code implementation NeurIPS 2015 Thibaut Lienart, Yee Whye Teh, Arnaud Doucet

The computational complexity of our algorithm at each iteration is quadratic in the number of particles.

On Markov chain Monte Carlo methods for tall data

1 code implementation 11 May 2015 Rémi Bardenet, Arnaud Doucet, Chris Holmes

Finally, we have only been able so far to propose subsampling-based methods which display good performance in scenarios where the Bernstein-von Mises approximation of the target posterior distribution is excellent.

Bayesian Inference

Asynchronous Anytime Sequential Monte Carlo

no code implementations NeurIPS 2014 Brooks Paige, Frank Wood, Arnaud Doucet, Yee Whye Teh

We introduce a new sequential Monte Carlo algorithm we call the particle cascade.

Fast Computation of Wasserstein Barycenters

1 code implementation 16 Oct 2013 Marco Cuturi, Arnaud Doucet

We present new algorithms to compute the mean of a set of empirical probability measures under the optimal transport metric.
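In one dimension the Wasserstein barycenter has a closed form, obtained by averaging quantile functions, which gives a quick illustration of the object the paper computes in the general case (toy Gaussian samples here; the paper's algorithms handle arbitrary empirical measures in higher dimensions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two empirical measures on the line: samples near 0 and samples near 4.
a = rng.normal(0.0, 1.0, 5000)
b = rng.normal(4.0, 1.0, 5000)

# In 1D the W2 barycenter averages the quantile functions; with equal sample
# sizes this amounts to averaging the sorted samples.
bary = 0.5 * (np.sort(a) + np.sort(b))

print(bary.mean(), bary.std())  # roughly 2 and 1: interpolates the two measures
```

Unlike the Euclidean average of densities (which would be bimodal), the Wasserstein barycenter of these two unimodal measures is again unimodal, sitting halfway between them.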

Bayesian Nonparametric Models on Decomposable Graphs

no code implementations NeurIPS 2009 Francois Caron, Arnaud Doucet

In latent feature models, we associate to each data point a potentially infinite number of binary latent variables indicating the possession of some features and the IBP is a prior distribution on the associated infinite binary matrix.
