Search Results for author: Arnaud Doucet

Found 96 papers, 51 papers with code

Target Score Matching

no code implementations 13 Feb 2024 Valentin De Bortoli, Michael Hutchinson, Peter Wirnsberger, Arnaud Doucet

Denoising Score Matching estimates the score of a noised version of a target distribution by minimizing a regression loss and is widely used to train the popular class of Denoising Diffusion Models.

Denoising Regression
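For context on the entry above, here is a minimal sketch of the standard denoising score matching loss that the paper builds on (the paper's contribution, target score matching, modifies this objective; `score_net` and the Gaussian noising are generic assumptions, not the paper's code):

```python
import torch

def dsm_loss(score_net, x0, sigma):
    """Denoising score matching loss, a minimal sketch.

    For x_t = x0 + sigma * eps with eps ~ N(0, I), the conditional score
    of the noising kernel at x_t is -eps / sigma, so score estimation
    reduces to a regression problem.
    """
    eps = torch.randn_like(x0)
    xt = x0 + sigma * eps
    target = -eps / sigma
    return ((score_net(xt) - target) ** 2).sum(dim=-1).mean()
```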

Marginal Density Ratio for Off-Policy Evaluation in Contextual Bandits

1 code implementation NeurIPS 2023 Muhammad Faaiz Taufiq, Arnaud Doucet, Rob Cornish, Jean-Francois Ton

Off-Policy Evaluation (OPE) in contextual bandits is crucial for assessing new policies using existing data without costly experimentation.

Causal Inference Multi-Armed Bandits +1
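As background for the entry above, a minimal inverse-propensity-weighting OPE estimator; the paper's marginal density ratio approach is designed to improve on weights of this kind. The callables `behaviour_prob` and `target_prob` are illustrative names, not the paper's API:

```python
import numpy as np

def ipw_value(rewards, actions, contexts, behaviour_prob, target_prob):
    """Inverse-propensity-weighted estimate of a target policy's value.

    behaviour_prob(a, x) and target_prob(a, x) return action
    probabilities under the logging and target policies.
    """
    w = np.array([target_prob(a, x) / behaviour_prob(a, x)
                  for a, x in zip(actions, contexts)])
    return float(np.mean(w * np.asarray(rewards)))
```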

Diffusion Generative Inverse Design

no code implementations 5 Sep 2023 Marin Vlastelica, Tatiana López-Guevara, Kelsey Allen, Peter Battaglia, Arnaud Doucet, Kimberley Stachenfeld

Inverse design refers to the problem of optimizing the input of an objective function in order to enact a target outcome.

Denoising

Nearly $d$-Linear Convergence Bounds for Diffusion Models via Stochastic Localization

no code implementations 7 Aug 2023 Joe Benton, Valentin De Bortoli, Arnaud Doucet, George Deligiannidis

We provide the first convergence bounds which are linear in the data dimension (up to logarithmic factors) assuming only finite second moments of the data distribution.

Denoising

Conformal prediction under ambiguous ground truth

1 code implementation 18 Jul 2023 David Stutz, Abhijit Guha Roy, Tatiana Matejovicova, Patricia Strachan, Ali Taylan Cemgil, Arnaud Doucet

However, in many real-world scenarios, the labels $Y_1, \ldots, Y_n$ are obtained by aggregating expert opinions using a voting procedure, resulting in a one-hot distribution $\mathbb{P}_{\mathrm{vote}}^{Y|X}$.

Conformal Prediction Uncertainty Quantification
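For orientation, a textbook split conformal classification sketch under unambiguous labels; the paper's contribution is to generalise calibration to aggregated, ambiguous ground truth, so this baseline is illustrative only:

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction with the 1 - p(y|x) score (a sketch).

    cal_probs: (n, K) predicted class probabilities on calibration data;
    returns an (m, K) boolean matrix of prediction sets with marginal
    coverage ~ 1 - alpha under exchangeability.
    """
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level)
    return (1.0 - test_probs) <= q
```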

A Unified Framework for U-Net Design and Analysis

1 code implementation NeurIPS 2023 Christopher Williams, Fabian Falck, George Deligiannidis, Chris Holmes, Arnaud Doucet, Saifuddin Syed

U-Nets are a go-to, state-of-the-art neural architecture for numerous tasks involving continuous signals on a square domain, such as images and Partial Differential Equations (PDEs); however, their design and architecture are understudied.

Image Segmentation Semantic Segmentation

Tree-Based Diffusion Schrödinger Bridge with Applications to Wasserstein Barycenters

1 code implementation NeurIPS 2023 Maxence Noble, Valentin De Bortoli, Arnaud Doucet, Alain Durmus

In this paper, we consider an entropic version of mOT with a tree-structured quadratic cost, i.e., a function that can be written as a sum of pairwise cost functions between the nodes of a tree.

Error Bounds for Flow Matching Methods

no code implementations 26 May 2023 Joe Benton, George Deligiannidis, Arnaud Doucet

Previous work derived bounds on the approximation error of diffusion models under the stochastic sampling regime, given assumptions on the $L^2$ loss.

Denoising

Diffusion Schrödinger Bridge Matching

no code implementations NeurIPS 2023 Yuyang Shi, Valentin De Bortoli, Andrew Campbell, Arnaud Doucet

However, while it is desirable in many applications to approximate the deterministic dynamic Optimal Transport (OT) map which admits attractive properties, DDMs and FMMs are not guaranteed to provide transports close to the OT map.

Denoising

Denoising Diffusion Samplers

no code implementations 27 Feb 2023 Francisco Vargas, Will Grathwohl, Arnaud Doucet

Denoising Diffusion Samplers (DDS) are obtained by approximating the corresponding time-reversal.

Denoising

Reduce, Reuse, Recycle: Compositional Generation with Energy-Based Diffusion Models and MCMC

2 code implementations 22 Feb 2023 Yilun Du, Conor Durkan, Robin Strudel, Joshua B. Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, Will Grathwohl

In this work, we build upon these ideas using the score-based interpretation of diffusion models, and explore alternative ways to condition, modify, and reuse diffusion models for tasks involving compositional generation and guidance.

Text-to-Image Generation
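To make the composition idea concrete, a generic sketch under the score-based view: the score of a product of component models is the sum of their scores, and MCMC moves such as Langevin updates can correct the approximate composed sampler. This illustrates the principle, not the paper's exact samplers:

```python
import numpy as np

def composed_score(score_fns, x):
    """Score of a product of component models: the sum of their scores."""
    return sum(s(x) for s in score_fns)

def ula_step(x, score_fn, step, rng=None):
    """One unadjusted Langevin move using a (possibly composed) score;
    MCMC moves of this kind refine samples from the composed model."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(size=x.shape)
    return x + step * score_fn(x) + np.sqrt(2.0 * step) * noise
```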

SE(3) diffusion model with application to protein backbone generation

1 code implementation 5 Feb 2023 Jason Yim, Brian L. Trippe, Valentin De Bortoli, Emile Mathieu, Arnaud Doucet, Regina Barzilay, Tommi Jaakkola

The design of novel protein structures remains a challenge in protein engineering for applications across biomedicine and chemistry.

Protein Structure Prediction

A Multi-Resolution Framework for U-Nets with Applications to Hierarchical VAEs

no code implementations 19 Jan 2023 Fabian Falck, Christopher Williams, Dominic Danks, George Deligiannidis, Christopher Yau, Chris Holmes, Arnaud Doucet, Matthew Willetts

U-Net architectures are ubiquitous in state-of-the-art deep learning; however, their regularisation properties and relationship to wavelets are understudied.

Causal Falsification of Digital Twins

1 code implementation 17 Jan 2023 Rob Cornish, Muhammad Faaiz Taufiq, Arnaud Doucet, Chris Holmes

We consider how to assess the accuracy of a digital twin using real-world data.

Causal Inference

Particle-Based Score Estimation for State Space Model Learning in Autonomous Driving

no code implementations 14 Dec 2022 Angad Singh, Omar Makhlouf, Maximilian Igl, Joao Messias, Arnaud Doucet, Shimon Whiteson

Recent methods addressing this problem typically differentiate through time in a particle filter, which requires workarounds for the non-differentiable resampling step that yield biased or high-variance gradient estimates.

Autonomous Driving

Continuous diffusion for categorical data

no code implementations 28 Nov 2022 Sander Dieleman, Laurent Sartran, Arman Roshannai, Nikolay Savinov, Yaroslav Ganin, Pierre H. Richemond, Arnaud Doucet, Robin Strudel, Chris Dyer, Conor Durkan, Curtis Hawthorne, Rémi Leblond, Will Grathwohl, Jonas Adler

Diffusion models have quickly become the go-to paradigm for generative modelling of perceptual signals (such as images and sound) through iterative refinement.

Language Modelling

From Denoising Diffusions to Denoising Markov Models

1 code implementation 7 Nov 2022 Joe Benton, Yuyang Shi, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet

We propose a unifying framework generalising this approach to a wide class of spaces and leading to an original extension of score matching.

Denoising

Maximum Likelihood Learning of Unnormalized Models for Simulation-Based Inference

1 code implementation 26 Oct 2022 Pierre Glaser, Michael Arbel, Samo Hromadka, Arnaud Doucet, Arthur Gretton

We introduce two synthetic likelihood methods for Simulation-Based Inference (SBI), to conduct either amortized or targeted inference from experimental observations when a high-fidelity simulator is available.

Categorical SDEs with Simplex Diffusion

no code implementations 26 Oct 2022 Pierre H. Richemond, Sander Dieleman, Arnaud Doucet

Diffusion models typically operate in the standard framework of generative modelling by producing continuously-valued datapoints.

Text Generation

Spectral Diffusion Processes

no code implementations 28 Sep 2022 Angus Phillips, Thomas Seror, Michael Hutchinson, Valentin De Bortoli, Arnaud Doucet, Emile Mathieu

Score-based generative modelling (SGM) has proven to be a very effective method for modelling densities on finite-dimensional spaces.

Dimensionality Reduction

Generalisation under gradient descent via deterministic PAC-Bayes

no code implementations 6 Sep 2022 Eugenio Clerico, Tyler Farghly, George Deligiannidis, Benjamin Guedj, Arnaud Doucet

We establish disintegrated PAC-Bayesian generalisation bounds for models trained with gradient descent methods or continuous gradient flows.

Score-Based Diffusion meets Annealed Importance Sampling

1 code implementation 16 Aug 2022 Arnaud Doucet, Will Grathwohl, Alexander G. D. G. Matthews, Heiko Strathmann

To obtain an importance sampling estimate of the marginal likelihood, AIS introduces an extended target distribution to reweight the Markov chain proposal.
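As a reference point for the entry above, a minimal AIS log-weight computation with geometric bridging between a base density and the target; `mcmc_kernel` is any move leaving the intermediate target invariant and is assumed, not specified here:

```python
def ais_log_weight(x0, log_p0, log_p1, mcmc_kernel, betas):
    """AIS log-weight for one chain with geometric bridging (a sketch).

    betas runs from 0 to 1; pi_b is proportional to p0^(1-b) * p1^b, and
    mcmc_kernel(x, log_pi) must leave pi_b invariant. Assumes x0 ~ p0.
    """
    x, logw = x0, 0.0
    for b_prev, b in zip(betas[:-1], betas[1:]):
        logw += (b - b_prev) * (log_p1(x) - log_p0(x))   # reweight
        log_pi = lambda y, b=b: (1.0 - b) * log_p0(y) + b * log_p1(y)
        x = mcmc_kernel(x, log_pi)                        # then move
    return logw
```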

Riemannian Diffusion Schrödinger Bridge

no code implementations 7 Jul 2022 James Thornton, Michael Hutchinson, Emile Mathieu, Valentin De Bortoli, Yee Whye Teh, Arnaud Doucet

Our proposed method generalizes the Diffusion Schrödinger Bridge introduced in De Bortoli et al. (2021) to the non-Euclidean setting and extends Riemannian score-based models beyond the first time reversal.

Density Estimation

An Empirical Study of Implicit Regularization in Deep Offline RL

no code implementations 5 Jul 2022 Caglar Gulcehre, Srivatsan Srinivasan, Jakub Sygnowski, Georg Ostrovski, Mehrdad Farajtabar, Matt Hoffman, Razvan Pascanu, Arnaud Doucet

We also empirically identify three phases of learning that explain the impact of implicit regularization on the learning dynamics, and we find that bootstrapping alone is insufficient to explain the collapse of the effective rank.

Offline RL

Conformal Off-Policy Prediction in Contextual Bandits

no code implementations 9 Jun 2022 Muhammad Faaiz Taufiq, Jean-Francois Ton, Rob Cornish, Yee Whye Teh, Arnaud Doucet

Most off-policy evaluation methods for contextual bandits have focused on the expected outcome of a policy, which is estimated via methods that at best provide only asymptotic guarantees.

Conformal Prediction Multi-Armed Bandits +1

A Continuous Time Framework for Discrete Denoising Models

1 code implementation 30 May 2022 Andrew Campbell, Joe Benton, Valentin De Bortoli, Tom Rainforth, George Deligiannidis, Arnaud Doucet

We provide the first complete continuous time framework for denoising diffusion models of discrete data.

Denoising

Towards Learning Universal Hyperparameter Optimizers with Transformers

1 code implementation 26 May 2022 Yutian Chen, Xingyou Song, Chansoo Lee, Zi Wang, Qiuyi Zhang, David Dohan, Kazuya Kawakami, Greg Kochanski, Arnaud Doucet, Marc'Aurelio Ranzato, Sagi Perel, Nando de Freitas

Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution.

Hyperparameter Optimization Meta-Learning

Chained Generalisation Bounds

no code implementations 2 Mar 2022 Eugenio Clerico, Amitis Shidani, George Deligiannidis, Arnaud Doucet

This work discusses how to derive upper bounds for the expected generalisation error of supervised learning algorithms by means of the chaining technique.

On PAC-Bayesian reconstruction guarantees for VAEs

no code implementations 23 Feb 2022 Badr-Eddine Chérief-Abdellatif, Yuyang Shi, Arnaud Doucet, Benjamin Guedj

Despite its wide use and empirical successes, the theoretical understanding and study of the behaviour and performance of the variational autoencoder (VAE) have only emerged in the past few years.

Riemannian Score-Based Generative Modelling

2 code implementations 6 Feb 2022 Valentin De Bortoli, Emile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet

Score-based generative models (SGMs) are a powerful class of generative models that exhibit remarkable empirical performance.

Denoising

Importance Weighting Approach in Kernel Bayes' Rule

no code implementations 5 Feb 2022 Liyuan Xu, Yutian Chen, Arnaud Doucet, Arthur Gretton

We study a nonparametric approach to Bayesian computation via feature means, where the expectation of prior features is updated to yield expected kernel posterior features, based on regression from learned neural net or kernel features of the observations.

Continual Repeated Annealed Flow Transport Monte Carlo

2 code implementations 31 Jan 2022 Alexander G. D. G. Matthews, Michael Arbel, Danilo J. Rezende, Arnaud Doucet

We propose Continual Repeated Annealed Flow Transport Monte Carlo (CRAFT), a method that combines a sequential Monte Carlo (SMC) sampler (itself a generalization of Annealed Importance Sampling) with variational inference using normalizing flows.

Variational Inference

COIN++: Neural Compression Across Modalities

1 code implementation 30 Jan 2022 Emilien Dupont, Hrushikesh Loya, Milad Alizadeh, Adam Goliński, Yee Whye Teh, Arnaud Doucet

Neural compression algorithms are typically based on autoencoders that require specialized encoder and decoder architectures for different data modalities.

NEO: Non Equilibrium Sampling on the Orbits of a Deterministic Transform

1 code implementation NeurIPS 2021 Achille Thin, Yazid Janati El Idrissi, Sylvain Le Corff, Charles Ollion, Eric Moulines, Arnaud Doucet, Alain Durmus, Christian Robert

Sampling from a complex distribution $\pi$ and approximating its intractable normalizing constant $\mathrm{Z}$ are challenging problems.

Simulating Diffusion Bridges with Score Matching

1 code implementation 14 Nov 2021 Jeremy Heng, Valentin De Bortoli, Arnaud Doucet, James Thornton

This is known to be a challenging problem that has received much attention in the last two decades.

Econometrics

Online Variational Filtering and Parameter Learning

1 code implementation NeurIPS 2021 Andrew Campbell, Yuyang Shi, Tom Rainforth, Arnaud Doucet

We present a variational method for online state estimation and parameter learning in state-space models (SSMs), a ubiquitous class of latent variable models for sequential data.

Conditionally Gaussian PAC-Bayes

1 code implementation 22 Oct 2021 Eugenio Clerico, George Deligiannidis, Arnaud Doucet

Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent.

Learning Optimal Conformal Classifiers

2 code implementations ICLR 2022 David Stutz, Krishnamurthy Dvijotham, Ali Taylan Cemgil, Arnaud Doucet

However, using CP as a separate processing step after training prevents the underlying model from adapting to the prediction of confidence sets.

Conformal Prediction Medical Diagnosis

The Curse of Depth in Kernel Regime

no code implementations NeurIPS Workshop ICBINB 2021 Soufiane Hayou, Arnaud Doucet, Judith Rousseau

Recent work by Jacot et al. (2018) has shown that training a neural network of any kind with gradient descent is strongly related to kernel gradient descent in function space with respect to the Neural Tangent Kernel (NTK).

Mitigating Statistical Bias within Differentially Private Synthetic Data

no code implementations 24 Aug 2021 Sahra Ghalebikesabi, Harrison Wilde, Jack Jewson, Arnaud Doucet, Sebastian Vollmer, Chris Holmes

Increasing interest in privacy-preserving machine learning has led to new and evolved approaches for generating private synthetic data from undisclosed real data.

Privacy Preserving

Monte Carlo Variational Auto-Encoders

2 code implementations 30 Jun 2021 Achille Thin, Nikita Kotelevskii, Arnaud Doucet, Alain Durmus, Eric Moulines, Maxim Panov

Variational auto-encoders (VAE) are popular deep latent variable models which are trained by maximizing an Evidence Lower Bound (ELBO).
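For context, a single-sample reparameterised ELBO for a Gaussian encoder, the quantity that the paper's Monte Carlo VAEs tighten with MCMC-based estimators; `encoder`, `decoder_loglik`, and `log_prior` are illustrative callables, not the paper's API:

```python
import math
import torch

def gaussian_elbo(x, encoder, decoder_loglik, log_prior):
    """Single-sample reparameterised ELBO (a sketch).

    encoder(x) -> (mu, log_var) of a Gaussian q(z|x);
    decoder_loglik(x, z) -> log p(x|z); log_prior(z) -> log p(z).
    """
    mu, log_var = encoder(x)
    z = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterise
    log_q = (-0.5 * (log_var + (z - mu) ** 2 / log_var.exp()
                     + math.log(2 * math.pi))).sum(-1)
    return decoder_loglik(x, z) + log_prior(z) - log_q
```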

Wide stochastic networks: Gaussian limit and PAC-Bayesian training

1 code implementation 17 Jun 2021 Eugenio Clerico, George Deligiannidis, Arnaud Doucet

The limit of infinite width allows for substantial simplifications in the analytical study of over-parameterised neural networks.

Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling

2 code implementations NeurIPS 2021 Valentin De Bortoli, James Thornton, Jeremy Heng, Arnaud Doucet

In contrast, solving the Schrödinger Bridge problem (SB), i.e., an entropy-regularized optimal transport problem on path spaces, yields diffusions which generate samples from the data distribution in finite time.

On Instrumental Variable Regression for Deep Offline Policy Evaluation

1 code implementation 21 May 2021 Yutian Chen, Liyuan Xu, Caglar Gulcehre, Tom Le Paine, Arthur Gretton, Nando de Freitas, Arnaud Doucet

By applying different IV techniques to OPE, we are not only able to recover previously proposed OPE methods such as model-based techniques but also to obtain competitive new techniques.

Regression Reinforcement Learning (RL)

NEO: Non Equilibrium Sampling on the Orbit of a Deterministic Transform

1 code implementation 17 Mar 2021 Achille Thin, Yazid Janati, Sylvain Le Corff, Charles Ollion, Arnaud Doucet, Alain Durmus, Eric Moulines, Christian Robert

Sampling from a complex distribution $\pi$ and approximating its intractable normalizing constant Z are challenging problems.

COIN: COmpression with Implicit Neural representations

1 code implementation ICLR Workshop Neural_Compression 2021 Emilien Dupont, Adam Goliński, Milad Alizadeh, Yee Whye Teh, Arnaud Doucet

We propose a new simple approach for image compression: instead of storing the RGB values for each pixel of an image, we store the weights of a neural network overfitted to the image.

Data Compression Image Compression
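A minimal sketch of the COIN idea: overfit a small coordinate network to one image and treat its weights as the code. COIN itself uses sine activations (a SIREN) and quantises the weights; the ReLU MLP, sizes, and learning rate here are placeholders:

```python
import torch
from torch import nn

def fit_coin(image, steps=1000, lr=2e-4):
    """Overfit a coordinate MLP to one image, COIN-style (a sketch).

    image: (H, W, 3) tensor in [0, 1]. The trained weights are the
    compressed representation of this single image.
    """
    H, W, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H),
                            torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)
    net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                        nn.Linear(64, 64), nn.ReLU(),
                        nn.Linear(64, 3), nn.Sigmoid())
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    target = image.reshape(-1, 3)
    for _ in range(steps):
        opt.zero_grad()
        ((net(coords) - target) ** 2).mean().backward()
        opt.step()
    return net
```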

Improving Lossless Compression Rates via Monte Carlo Bits-Back Coding

1 code implementation ICLR Workshop Neural_Compression 2021 Yangjun Ruan, Karen Ullrich, Daniel Severo, James Townsend, Ashish Khisti, Arnaud Doucet, Alireza Makhzani, Chris J. Maddison

Naively applied, our schemes would require more initial bits than the standard bits-back coder, but we show how to drastically reduce this additional cost with couplings in the latent space.

Data Compression

Differentiable Particle Filtering via Entropy-Regularized Optimal Transport

1 code implementation 15 Feb 2021 Adrien Corenflos, James Thornton, George Deligiannidis, Arnaud Doucet

Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models.

Variational Inference
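To illustrate the core idea of the entry above, a sketch of differentiable resampling via entropy-regularised optimal transport: solve a Sinkhorn problem between the weighted particle cloud and a uniformly weighted one, then transport. Cost normalisation and the stabilised Sinkhorn iterations of the paper are omitted:

```python
import numpy as np

def ot_resample(particles, logw, eps=0.1, iters=100):
    """Differentiable resampling via entropy-regularised OT (a sketch).

    particles: (n, d) array, logw: (n,) unnormalised log-weights.
    """
    n = len(particles)
    w = np.exp(logw - logw.max()); w /= w.sum()
    C = ((particles[:, None, :] - particles[None, :, :]) ** 2).sum(-1)
    K = np.exp(-C / eps)
    u, v = np.ones(n), np.ones(n)
    for _ in range(iters):            # Sinkhorn fixed-point iterations
        u = w / (K @ v)
        v = (1.0 / n) / (K.T @ u)
    P = u[:, None] * K * v[None, :]   # plan: rows sum to w, cols to 1/n
    return n * (P.T @ particles)      # equally weighted transported cloud
```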

Annealed Flow Transport Monte Carlo

3 code implementations 15 Feb 2021 Michael Arbel, Alexander G. D. G. Matthews, Arnaud Doucet

Annealed Importance Sampling (AIS) and its Sequential Monte Carlo (SMC) extensions are state-of-the-art methods for estimating normalizing constants of probability distributions.

Generative Models as Distributions of Functions

1 code implementation 9 Feb 2021 Emilien Dupont, Yee Whye Teh, Arnaud Doucet

By treating data points as functions, we can abstract away from the specific type of data we train on and construct models that are agnostic to discretization.

Stable ResNet

no code implementations 24 Oct 2020 Soufiane Hayou, Eugenio Clerico, Bobby He, George Deligiannidis, Arnaud Doucet, Judith Rousseau

Deep ResNet architectures have achieved state-of-the-art performance on many tasks.

Learning Deep Features in Instrumental Variable Regression

1 code implementation ICLR 2021 Liyuan Xu, Yutian Chen, Siddarth Srinivasan, Nando de Freitas, Arnaud Doucet, Arthur Gretton

We propose a novel method, deep feature instrumental variable regression (DFIV), to address the case where relations between instruments, treatments, and outcomes may be nonlinear.

Regression

Unbiased Gradient Estimation for Variational Auto-Encoders using Coupled Markov Chains

no code implementations 5 Oct 2020 Francisco J. R. Ruiz, Michalis K. Titsias, Taylan Cemgil, Arnaud Doucet

The variational auto-encoder (VAE) is a deep latent variable model that has two neural networks in an autoencoder-like architecture; one of them parameterizes the model's likelihood.

Variational Inference with Continuously-Indexed Normalizing Flows

1 code implementation 10 Jul 2020 Anthony Caterini, Rob Cornish, Dino Sejdinovic, Arnaud Doucet

Continuously-indexed flows (CIFs) have recently achieved improvements over baseline normalizing flows on a variety of density estimation tasks.

Bayesian Inference Density Estimation +1

Noisy Adaptive Group Testing using Bayesian Sequential Experimental Design

1 code implementation 26 Apr 2020 Marco Cuturi, Olivier Teboul, Quentin Berthet, Arnaud Doucet, Jean-Philippe Vert

Our goal in this paper is to propose new group testing algorithms that can operate in a noisy setting (tests can be mistaken) and decide adaptively (looking at past results) which groups to test next, with the aim of converging to a good detection as quickly, and with as few tests, as possible.

Experimental Design

Robust Pruning at Initialization

no code implementations ICLR 2021 Soufiane Hayou, Jean-Francois Ton, Arnaud Doucet, Yee Whye Teh

Overparameterized Neural Networks (NN) display state-of-the-art performance.

Schrödinger Bridge Samplers

no code implementations 31 Dec 2019 Espen Bernton, Jeremy Heng, Arnaud Doucet, Pierre E. Jacob

This is achieved by iteratively modifying the transition kernels of the reference Markov chain to obtain a process whose marginal distribution at time $T$ becomes closer to $\pi_T = \pi$, via regression-based approximations of the corresponding iterative proportional fitting recursion.

Localised Generative Flows

no code implementations 25 Sep 2019 Rob Cornish, Anthony Caterini, George Deligiannidis, Arnaud Doucet

We argue that flow-based density models based on continuous bijections are limited in their ability to learn target distributions with complicated topologies, and propose localised generative flows (LGFs) to address this problem.

Density Estimation Normalising Flows

Modular Meta-Learning with Shrinkage

no code implementations NeurIPS 2020 Yutian Chen, Abram L. Friesen, Feryal Behbahani, Arnaud Doucet, David Budden, Matthew W. Hoffman, Nando de Freitas

Many real-world problems, including multi-speaker text-to-speech synthesis, can greatly benefit from the ability to meta-learn large models with only a few task-specific components.

Image Classification Meta-Learning +2

Mean-field Behaviour of Neural Tangent Kernel for Deep Neural Networks

no code implementations 31 May 2019 Soufiane Hayou, Arnaud Doucet, Judith Rousseau

Recent work by Jacot et al. (2018) has shown that training a neural network of any kind with gradient descent in parameter space is strongly related to kernel gradient descent in function space with respect to the Neural Tangent Kernel (NTK).

Augmented Neural ODEs

6 code implementations NeurIPS 2019 Emilien Dupont, Arnaud Doucet, Yee Whye Teh

We show that Neural Ordinary Differential Equations (ODEs) learn representations that preserve the topology of the input space and prove that this implies the existence of functions Neural ODEs cannot represent.

Image Classification
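The fix proposed in the entry above is simple to state: integrate the ODE in an augmented space. A sketch with plain Euler integration, where `f` stands in for the learned dynamics network (an assumption, not the paper's code):

```python
import numpy as np

def anode_forward(x, f, aug_dim=2, steps=100, T=1.0):
    """Augmented Neural ODE forward pass via Euler steps (a sketch).

    x: (batch, d) inputs; f(h, t) -> dh/dt acts on the augmented state of
    size d + aug_dim. Appending zero channels gives trajectories room to
    avoid the crossings plain Neural ODEs cannot represent.
    """
    h = np.concatenate([x, np.zeros((x.shape[0], aug_dim))], axis=1)
    dt = T / steps
    for k in range(steps):
        h = h + dt * f(h, k * dt)
    return h
```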

Bernoulli Race Particle Filters

no code implementations 3 Mar 2019 Sebastian M. Schmon, Arnaud Doucet, George Deligiannidis

When the weights in a particle filter are not available analytically, standard resampling methods cannot be employed.

On the Impact of the Activation Function on Deep Neural Networks Training

no code implementations 19 Feb 2019 Soufiane Hayou, Arnaud Doucet, Judith Rousseau

The weight initialization and the activation function of deep neural networks have a crucial impact on the performance of the training procedure.

Unbiased Smoothing using Particle Independent Metropolis-Hastings

no code implementations 5 Feb 2019 Lawrence Middleton, George Deligiannidis, Arnaud Doucet, Pierre E. Jacob

We consider the approximation of expectations with respect to the distribution of a latent Markov process given noisy measurements.

Scalable Metropolis-Hastings for Exact Bayesian Inference with Large Datasets

1 code implementation 28 Jan 2019 Robert Cornish, Paul Vanetti, Alexandre Bouchard-Côté, George Deligiannidis, Arnaud Doucet

Bayesian inference via standard Markov Chain Monte Carlo (MCMC) methods is too computationally intensive to handle large datasets, since the cost per step usually scales like $\Theta(n)$ in the number of data points $n$.

Bayesian Inference
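For scale intuition, a plain random-walk Metropolis-Hastings baseline whose per-step cost is $\Theta(n)$; the paper's Scalable MH instead accepts or rejects using only a data subsample while leaving the exact posterior invariant. This sketch is the baseline, not the paper's algorithm:

```python
import numpy as np

def random_walk_mh(log_post, x0, n_iter=10_000, step=0.1, rng=None):
    """Plain random-walk Metropolis-Hastings (the full-data baseline).

    Each iteration evaluates log_post once, which costs Theta(n) when the
    posterior touches all n data points.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, lp = np.asarray(x0, dtype=float), log_post(x0)
    samples = []
    for _ in range(n_iter):
        prop = x + step * rng.normal(size=x.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # MH accept/reject
            x, lp = prop, lp_prop
        samples.append(x.copy())
    return np.array(samples)
```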

Hamiltonian Descent Methods

4 code implementations 13 Sep 2018 Chris J. Maddison, Daniel Paulin, Yee Whye Teh, Brendan O'Donoghue, Arnaud Doucet

Yet, crucially, the kinetic gradient map can be designed to incorporate information about the convex conjugate in a fashion that allows for linear convergence on convex functions that may be non-smooth or non-strongly convex.

Asymptotic Properties of Recursive Maximum Likelihood Estimation in Non-Linear State-Space Models

no code implementations 25 Jun 2018 Vladislav Z. B. Tadic, Arnaud Doucet

Using stochastic gradient search and the optimal filter derivative, it is possible to perform recursive (i.e., online) maximum likelihood estimation in a non-linear state-space model.

Hamiltonian Variational Auto-Encoder

3 code implementations NeurIPS 2018 Anthony L. Caterini, Arnaud Doucet, Dino Sejdinovic

However, for this methodology to be practically efficient, it is necessary to obtain low-variance unbiased estimators of the ELBO and its gradients with respect to the parameters of interest.

Variational Inference

On the Selection of Initialization and Activation Function for Deep Neural Networks

no code implementations ICLR 2019 Soufiane Hayou, Arnaud Doucet, Judith Rousseau

We complete this analysis by providing quantitative results showing that, for a class of ReLU-like activation functions, the information indeed propagates deeper for an initialization at the edge of chaos.

Clone MCMC: Parallel High-Dimensional Gaussian Gibbs Sampling

no code implementations NeurIPS 2017 Andrei-Cristian Barbos, Francois Caron, Jean-François Giovannelli, Arnaud Doucet

We propose a generalized Gibbs sampler algorithm for obtaining samples approximately distributed from a high-dimensional Gaussian distribution.

Asymptotic Bias of Stochastic Gradient Search

no code implementations 30 Aug 2017 Vladislav B. Tadic, Arnaud Doucet

Relying on the same results, the asymptotic behavior of the recursive maximum split-likelihood estimation in hidden Markov models is analyzed, too.

Reinforcement Learning (RL)

Filtering Variational Objectives

3 code implementations NeurIPS 2017 Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, andriy mnih, Arnaud Doucet, Yee Whye Teh

When used as a surrogate objective for maximum likelihood estimation in latent variable models, the evidence lower bound (ELBO) produces state-of-the-art results.

Piecewise Deterministic Markov Processes for Scalable Monte Carlo on Restricted Domains

4 code implementations 16 Jan 2017 Joris Bierkens, Alexandre Bouchard-Côté, Arnaud Doucet, Andrew B. Duncan, Paul Fearnhead, Thibaut Lienart, Gareth Roberts, Sebastian J. Vollmer

Piecewise Deterministic Monte Carlo algorithms enable simulation from a posterior distribution, whilst only needing to access a sub-sample of data at each iteration.

Methodology Computation

Pseudo-Marginal Hamiltonian Monte Carlo

no code implementations 8 Jul 2016 Johan Alenlöv, Arnaud Doucet, Fredrik Lindsten

When following a Markov chain Monte Carlo (MCMC) approach to approximate the posterior distribution in this context, one typically either uses MCMC schemes which target the joint posterior of the parameters and some auxiliary latent variables, or pseudo-marginal Metropolis–Hastings (MH) schemes.

Bayesian Inference

Interacting Particle Markov Chain Monte Carlo

1 code implementation 16 Feb 2016 Tom Rainforth, Christian A. Naesseth, Fredrik Lindsten, Brooks Paige, Jan-Willem van de Meent, Arnaud Doucet, Frank Wood

We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method based on an interacting pool of standard and conditional sequential Monte Carlo samplers.

The Bouncy Particle Sampler: A Non-Reversible Rejection-Free Markov Chain Monte Carlo Method

3 code implementations 8 Oct 2015 Alexandre Bouchard-Côté, Sebastian J. Vollmer, Arnaud Doucet

We explore and propose several original extensions of an alternative approach introduced recently in Peters and de With (2012) where the target distribution of interest is explored using a continuous-time Markov process.

Methodology Statistics Theory
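The defining move of the sampler above is the bounce: when the process would drift against the target, the velocity is reflected in the hyperplane orthogonal to the gradient of the potential $U$. A sketch of that single operator (event-time simulation and refreshments omitted):

```python
import numpy as np

def bounce(v, grad_u):
    """Bouncy Particle Sampler bounce operator (a sketch).

    Reflects the velocity in the hyperplane orthogonal to g = grad U(x):
    v' = v - 2 (<v, g> / <g, g>) g, flipping the component along the
    gradient of the potential while preserving the speed.
    """
    g = np.asarray(grad_u)
    return v - 2.0 * (v @ g) / (g @ g) * g
```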

Gibbs Flow for Approximate Transport with Applications to Bayesian Computation

1 code implementation 29 Sep 2015 Jeremy Heng, Arnaud Doucet, Yvo Pokern

Any measurable function $T:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ such that $Y=T(X)\sim\pi_{1}$ if $X\sim\pi_{0}$ is called a transport map from $\pi_{0}$ to $\pi_{1}$.

Computation

Expectation Particle Belief Propagation

1 code implementation NeurIPS 2015 Thibaut Lienart, Yee Whye Teh, Arnaud Doucet

The computational complexity of our algorithm at each iteration is quadratic in the number of particles.

On Markov chain Monte Carlo methods for tall data

1 code implementation 11 May 2015 Rémi Bardenet, Arnaud Doucet, Chris Holmes

Finally, we have so far only been able to propose subsampling-based methods that display good performance in scenarios where the Bernstein-von Mises approximation of the target posterior distribution is excellent.

Bayesian Inference

Asynchronous Anytime Sequential Monte Carlo

no code implementations NeurIPS 2014 Brooks Paige, Frank Wood, Arnaud Doucet, Yee Whye Teh

We introduce a new sequential Monte Carlo algorithm we call the particle cascade.

Fast Computation of Wasserstein Barycenters

2 code implementations 16 Oct 2013 Marco Cuturi, Arnaud Doucet

We present new algorithms to compute the mean of a set of empirical probability measures under the optimal transport metric.

Constrained Clustering
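As one concrete route to such barycenters, a sketch using iterative Bregman projections on the entropy-regularised problem (in the style of later work by Benamou et al., 2015); the paper's own algorithms are subgradient- and smoothed-dual-based, so this is illustrative only:

```python
import numpy as np

def sinkhorn_barycenter(hists, C, weights, eps=0.05, iters=200):
    """Entropy-regularised Wasserstein barycenter of histograms (a sketch).

    hists: (k, n) histograms on a shared grid; C: (n, n) ground cost;
    weights: (k,) barycentric weights summing to one.
    """
    K = np.exp(-C / eps)
    v = np.ones_like(hists)
    for _ in range(iters):
        u = hists / (v @ K.T)                                  # input marginals
        b = np.exp((weights[:, None] * np.log(u @ K)).sum(0))  # geometric mean
        v = b[None, :] / (u @ K)                               # barycenter marginal
    return b
```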

Bayesian Nonparametric Models on Decomposable Graphs

no code implementations NeurIPS 2009 Francois Caron, Arnaud Doucet

In latent feature models, we associate to each data point a potentially infinite number of binary latent variables indicating the possession of some features and the IBP is a prior distribution on the associated infinite binary matrix.

Clustering
