no code implementations • 5 Sep 2023 • Marin Vlastelica, Tatiana López-Guevara, Kelsey Allen, Peter Battaglia, Arnaud Doucet, Kimberley Stachenfeld
Inverse design refers to the problem of optimizing the input of an objective function in order to enact a target outcome.
no code implementations • 17 Aug 2023 • Caglar Gulcehre, Tom Le Paine, Srivatsan Srinivasan, Ksenia Konyushkova, Lotte Weerts, Abhishek Sharma, Aditya Siddhant, Alex Ahern, Miaosen Wang, Chenjie Gu, Wolfgang Macherey, Arnaud Doucet, Orhan Firat, Nando de Freitas
Reinforcement learning from human feedback (RLHF) can improve the quality of large language model (LLM) outputs by aligning them with human preferences.
no code implementations • 7 Aug 2023 • Joe Benton, Valentin De Bortoli, Arnaud Doucet, George Deligiannidis
We provide the first convergence bounds which are linear in the data dimension (up to logarithmic factors) assuming only finite second moments of the data distribution.
no code implementations • 18 Jul 2023 • David Stutz, Abhijit Guha Roy, Tatiana Matejovicova, Patricia Strachan, Ali Taylan Cemgil, Arnaud Doucet
However, in many real-world scenarios, the labels $Y_1, \ldots, Y_n$ are obtained by aggregating expert opinions using a voting procedure, resulting in a one-hot distribution $\mathbb{P}_{vote}^{Y|X}$.
no code implementations • 5 Jul 2023 • David Stutz, Ali Taylan Cemgil, Abhijit Guha Roy, Tatiana Matejovicova, Melih Barsbey, Patricia Strachan, Mike Schaekermann, Jan Freyberg, Rajeev Rikhye, Beverly Freeman, Javier Perez Matos, Umesh Telang, Dale R. Webster, Yuan Liu, Greg S. Corrado, Yossi Matias, Pushmeet Kohli, Yun Liu, Arnaud Doucet, Alan Karthikesalingam
In contrast, we propose a framework where aggregation is done using a statistical model.
no code implementations • 26 May 2023 • Joe Benton, George Deligiannidis, Arnaud Doucet
Previous work derived bounds on the approximation error of diffusion models under the stochastic sampling regime, given assumptions on the $L^2$ loss.
1 code implementation • 25 May 2023 • Andrew Campbell, William Harvey, Christian Weilbach, Valentin De Bortoli, Tom Rainforth, Arnaud Doucet
We propose a new class of generative models that naturally handle data of varying dimensionality by jointly modeling the state and dimension of each datapoint.
1 code implementation • 29 Mar 2023 • Yuyang Shi, Valentin De Bortoli, Andrew Campbell, Arnaud Doucet
However, while it is desirable in many applications to approximate the deterministic dynamic Optimal Transport (OT) map, which admits attractive properties, DDMs and FMMs are not guaranteed to provide transports close to the OT map.
no code implementations • 27 Feb 2023 • Francisco Vargas, Will Grathwohl, Arnaud Doucet
Denoising Diffusion Samplers (DDS) are obtained by approximating the corresponding time-reversal.
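To make the time-reversal idea concrete, here is a minimal sketch (an illustration, not the DDS algorithm itself) of Euler-Maruyama sampling from the reverse of an Ornstein-Uhlenbeck forward process; the `score` callable is a stand-in assumption for whatever approximation is available.

```python
import numpy as np

def reverse_sde_sample(score, n_steps=1000, T=1.0, dim=2, rng=None):
    """Euler-Maruyama discretisation of the time reversal of the OU
    forward SDE dX = -X dt + sqrt(2) dW (stationary law N(0, I)).
    `score(x, t)` approximates grad_x log p_t(x) -- assumed given."""
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x = rng.standard_normal(dim)          # start from the reference N(0, I)
    for k in range(n_steps):
        t = T - k * dt                    # integrate backwards in time
        drift = x + 2.0 * score(x, t)     # reverse drift: -f(x,t) + g(t)^2 * score
        x = x + drift * dt + np.sqrt(2.0 * dt) * rng.standard_normal(dim)
    return x

# Sanity check: for a N(0, I) target the true score is -x, and the
# reverse SDE maps N(0, I) back to itself.
sample = reverse_sde_sample(lambda x, t: -x)
```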
2 code implementations • 22 Feb 2023 • Yilun Du, Conor Durkan, Robin Strudel, Joshua B. Tenenbaum, Sander Dieleman, Rob Fergus, Jascha Sohl-Dickstein, Arnaud Doucet, Will Grathwohl
In this work, we build upon these ideas using the score-based interpretation of diffusion models, and explore alternative ways to condition, modify, and reuse diffusion models for tasks involving compositional generation and guidance.
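A minimal sketch of the compositional principle, assuming two component score functions are available: because log-densities add under a product of experts, so do their scores, and unadjusted Langevin dynamics can then sample the composition.

```python
import numpy as np

def langevin_product(score_a, score_b, n_steps=5000, step=1e-2, dim=1, rng=None):
    """Unadjusted Langevin dynamics targeting the product density
    p_a(x) * p_b(x) (up to normalisation): since log p factorises,
    the combined score is simply score_a + score_b."""
    rng = np.random.default_rng() if rng is None else rng
    x = rng.standard_normal(dim)
    for _ in range(n_steps):
        s = score_a(x) + score_b(x)
        x = x + step * s + np.sqrt(2.0 * step) * rng.standard_normal(dim)
    return x

# Two unit Gaussians centred at -1 and +1: their product is N(0, 1/2).
x = langevin_product(lambda x: -(x + 1.0), lambda x: -(x - 1.0))
```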
1 code implementation • 5 Feb 2023 • Jason Yim, Brian L. Trippe, Valentin De Bortoli, Emile Mathieu, Arnaud Doucet, Regina Barzilay, Tommi Jaakkola
The design of novel protein structures remains a challenge in protein engineering for applications across biomedicine and chemistry.
no code implementations • 19 Jan 2023 • Fabian Falck, Christopher Williams, Dominic Danks, George Deligiannidis, Christopher Yau, Chris Holmes, Arnaud Doucet, Matthew Willetts
U-Net architectures are ubiquitous in state-of-the-art deep learning; however, their regularisation properties and relationship to wavelets are understudied.
1 code implementation • 17 Jan 2023 • Rob Cornish, Muhammad Faaiz Taufiq, Arnaud Doucet, Chris Holmes
We consider how to assess the accuracy of a digital twin using real-world data.
no code implementations • 14 Dec 2022 • Angad Singh, Omar Makhlouf, Maximilian Igl, Joao Messias, Arnaud Doucet, Shimon Whiteson
Recent methods addressing this problem typically differentiate through time in a particle filter, which requires workarounds for the non-differentiable resampling step that yield biased or high-variance gradient estimates.
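As background, a common workaround of this kind is soft resampling (Karkus et al., 2018), sketched below as an assumption-laden illustration, not the paper's method: the index sampling stays discrete, but the corrected weights depend smoothly on the inputs, so some gradient signal flows through them.

```python
import numpy as np

def soft_resample(particles, weights, alpha=0.5, rng=None):
    """Soft resampling: draw indices from a mixture of the particle
    weights and a uniform, then correct with new importance weights.
    The index draw is still non-differentiable, but the corrected
    weights are a smooth function of `weights`."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(weights)
    q = alpha * weights + (1.0 - alpha) / n      # mixture proposal
    idx = rng.choice(n, size=n, p=q)
    new_w = weights[idx] / q[idx]                # importance correction
    return particles[idx], new_w / new_w.sum()
```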
no code implementations • 28 Nov 2022 • Sander Dieleman, Laurent Sartran, Arman Roshannai, Nikolay Savinov, Yaroslav Ganin, Pierre H. Richemond, Arnaud Doucet, Robin Strudel, Chris Dyer, Conor Durkan, Curtis Hawthorne, Rémi Leblond, Will Grathwohl, Jonas Adler
Diffusion models have quickly become the go-to paradigm for generative modelling of perceptual signals (such as images and sound) through iterative refinement.
1 code implementation • 7 Nov 2022 • Joe Benton, Yuyang Shi, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet
We propose a unifying framework generalising this approach to a wide class of spaces and leading to an original extension of score matching.
no code implementations • 26 Oct 2022 • Pierre H. Richemond, Sander Dieleman, Arnaud Doucet
Diffusion models typically operate in the standard framework of generative modelling by producing continuously-valued datapoints.
1 code implementation • 26 Oct 2022 • Pierre Glaser, Michael Arbel, Samo Hromadka, Arnaud Doucet, Arthur Gretton
We introduce two synthetic likelihood methods for Simulation-Based Inference (SBI), to conduct either amortized or targeted inference from experimental observations when a high-fidelity simulator is available.
no code implementations • NeurIPS 2023 • Kamélia Daudel, Joe Benton, Yuyang Shi, Arnaud Doucet
We then provide two complementary theoretical analyses of the VR-IWAE bound and thus of the standard IWAE bound.
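For reference, here is a minimal Monte Carlo estimate of the standard K-sample IWAE bound that the VR-IWAE bound generalises; the log-density callables and sampler are assumptions standing in for a trained model.

```python
import numpy as np

def iwae_bound(log_p_joint, log_q, x, sample_q, n_samples=64):
    """Monte Carlo estimate of the K-sample IWAE lower bound
    log (1/K) sum_k p(x, z_k) / q(z_k | x),  with z_k ~ q(.|x).
    `log_p_joint`, `log_q` and `sample_q` are assumed callables."""
    z = sample_q(x, n_samples)                       # (K, dim_z)
    log_w = np.array([log_p_joint(x, zk) - log_q(zk, x) for zk in z])
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))    # stable log-mean-exp
```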
no code implementations • 28 Sep 2022 • Angus Phillips, Thomas Seror, Michael Hutchinson, Valentin De Bortoli, Arnaud Doucet, Emile Mathieu
Score-based generative modelling (SGM) has proven to be a very effective method for modelling densities on finite-dimensional spaces.
no code implementations • 6 Sep 2022 • Eugenio Clerico, Tyler Farghly, George Deligiannidis, Benjamin Guedj, Arnaud Doucet
We establish disintegrated PAC-Bayesian generalisation bounds for models trained with gradient descent methods or continuous gradient flows.
no code implementations • 16 Aug 2022 • Arnaud Doucet, Will Grathwohl, Alexander G. D. G. Matthews, Heiko Strathmann
To obtain an importance sampling estimate of the marginal likelihood, AIS introduces an extended target distribution to reweight the Markov chain proposal.
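A minimal sketch of the resulting estimator: accumulate the log-weight along one annealed trajectory, with any user-supplied MCMC kernel that leaves each intermediate density invariant.

```python
import numpy as np

def ais_log_weight(log_gamma, betas, mcmc_kernel, x0):
    """One AIS trajectory. The annealed densities gamma(x, beta_t)
    interpolate from the prior (beta=0) to the target (beta=1);
    `mcmc_kernel(x, beta)` is any kernel invariant for gamma(., beta),
    and x0 must be drawn from the beta=0 distribution. Averaging
    exp of such log-weights over independent trajectories estimates
    the ratio of normalising constants."""
    x, log_w = x0, 0.0
    for b_prev, b in zip(betas[:-1], betas[1:]):
        log_w += log_gamma(x, b) - log_gamma(x, b_prev)  # reweight
        x = mcmc_kernel(x, b)                            # move
    return log_w
```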
no code implementations • 7 Jul 2022 • James Thornton, Michael Hutchinson, Emile Mathieu, Valentin De Bortoli, Yee Whye Teh, Arnaud Doucet
Our proposed method generalizes the Diffusion Schrödinger Bridge introduced in De Bortoli et al. (2021) to the non-Euclidean setting and extends Riemannian score-based models beyond the first time reversal.
no code implementations • 5 Jul 2022 • Caglar Gulcehre, Srivatsan Srinivasan, Jakub Sygnowski, Georg Ostrovski, Mehrdad Farajtabar, Matt Hoffman, Razvan Pascanu, Arnaud Doucet
Also, we empirically identify three phases of learning that explain the impact of implicit regularization on the learning dynamics and find that bootstrapping alone is insufficient to explain the collapse of the effective rank.
no code implementations • 30 Jun 2022 • Amitis Shidani, George Deligiannidis, Arnaud Doucet
We study a ranking problem in the contextual multi-armed bandit setting.
no code implementations • 9 Jun 2022 • Muhammad Faaiz Taufiq, Jean-Francois Ton, Rob Cornish, Yee Whye Teh, Arnaud Doucet
Most off-policy evaluation methods for contextual bandits have focused on the expected outcome of a policy, which is estimated via methods that at best provide only asymptotic guarantees.
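The textbook example of such an asymptotic method is inverse propensity scoring (IPS), sketched below with hypothetical policy-density callables; it is consistent for the expected outcome but offers no finite-sample guarantee, which is the gap the paper addresses.

```python
import numpy as np

def ips_estimate(rewards, actions, contexts, pi_target, pi_behaviour):
    """Inverse propensity scoring: reweight logged rewards by the
    ratio of target to behaviour action probabilities. Consistent,
    but its guarantees are only asymptotic."""
    w = np.array([pi_target(a, x) / pi_behaviour(a, x)
                  for a, x in zip(actions, contexts)])
    return np.mean(w * rewards)
```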
1 code implementation • 30 May 2022 • Andrew Campbell, Joe Benton, Valentin De Bortoli, Tom Rainforth, George Deligiannidis, Arnaud Doucet
We provide the first complete continuous time framework for denoising diffusion models of discrete data.
1 code implementation • 26 May 2022 • Yutian Chen, Xingyou Song, Chansoo Lee, Zi Wang, Qiuyi Zhang, David Dohan, Kazuya Kawakami, Greg Kochanski, Arnaud Doucet, Marc'Aurelio Ranzato, Sagi Perel, Nando de Freitas
Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution.
no code implementations • 2 Mar 2022 • Eugenio Clerico, Amitis Shidani, George Deligiannidis, Arnaud Doucet
This work discusses how to derive upper bounds for the expected generalisation error of supervised learning algorithms by means of the chaining technique.
1 code implementation • 27 Feb 2022 • Yuyang Shi, Valentin De Bortoli, George Deligiannidis, Arnaud Doucet
We extend the Schrödinger bridge framework to conditional simulation.
no code implementations • 23 Feb 2022 • Badr-Eddine Chérief-Abdellatif, Yuyang Shi, Arnaud Doucet, Benjamin Guedj
Despite its wide use and empirical successes, the theoretical understanding and study of the behaviour and performance of the variational autoencoder (VAE) have only emerged in the past few years.
1 code implementation • 6 Feb 2022 • Valentin De Bortoli, Emile Mathieu, Michael Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet
Score-based generative models (SGMs) are a powerful class of generative models that exhibit remarkable empirical performance.
no code implementations • 5 Feb 2022 • Liyuan Xu, Yutian Chen, Arnaud Doucet, Arthur Gretton
We study a nonparametric approach to Bayesian computation via feature means, where the expectation of prior features is updated to yield expected kernel posterior features, based on regression from learned neural net or kernel features of the observations.
2 code implementations • 31 Jan 2022 • Alexander G. D. G. Matthews, Michael Arbel, Danilo J. Rezende, Arnaud Doucet
We propose Continual Repeated Annealed Flow Transport Monte Carlo (CRAFT), a method that combines a sequential Monte Carlo (SMC) sampler (itself a generalization of Annealed Importance Sampling) with variational inference using normalizing flows.
1 code implementation • 30 Jan 2022 • Emilien Dupont, Hrushikesh Loya, Milad Alizadeh, Adam Goliński, Yee Whye Teh, Arnaud Doucet
Neural compression algorithms are typically based on autoencoders that require specialized encoder and decoder architectures for different data modalities.
1 code implementation • NeurIPS 2021 • Achille Thin, Yazid Janati El Idrissi, Sylvain Le Corff, Charles Ollion, Eric Moulines, Arnaud Doucet, Alain Durmus, Christian Robert
Sampling from a complex distribution $\pi$ and approximating its intractable normalizing constant $\mathrm{Z}$ are challenging problems.
1 code implementation • 14 Nov 2021 • Jeremy Heng, Valentin De Bortoli, Arnaud Doucet, James Thornton
This is known to be a challenging problem that has received much attention in the last two decades.
1 code implementation • NeurIPS 2021 • Andrew Campbell, Yuyang Shi, Tom Rainforth, Arnaud Doucet
We present a variational method for online state estimation and parameter learning in state-space models (SSMs), a ubiquitous class of latent variable models for sequential data.
1 code implementation • 22 Oct 2021 • Eugenio Clerico, George Deligiannidis, Arnaud Doucet
Recent studies have empirically investigated different methods to train stochastic neural networks on a classification task by optimising a PAC-Bayesian bound via stochastic gradient descent.
1 code implementation • ICLR 2022 • David Stutz, Krishnamurthy Dvijotham, Ali Taylan Cemgil, Arnaud Doucet
However, using CP as a separate processing step after training prevents the underlying model from adapting to the prediction of confidence sets.
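For contrast, here is the standard post-hoc split conformal recipe the paper improves upon, as a minimal sketch; the conformity score 1 - p(true class) used here is one common choice among several.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification: calibrate a
    threshold on held-out conformity scores, then include every
    class whose score clears it."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```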
no code implementations • NeurIPS Workshop ICBINB 2021 • Soufiane Hayou, Arnaud Doucet, Judith Rousseau
Recent work by Jacot et al. (2018) has shown that training a neural network of any kind with gradient descent is strongly related to kernel gradient descent in function space with respect to the Neural Tangent Kernel (NTK).
no code implementations • 24 Aug 2021 • Sahra Ghalebikesabi, Harrison Wilde, Jack Jewson, Arnaud Doucet, Sebastian Vollmer, Chris Holmes
Increasing interest in privacy-preserving machine learning has led to new and evolved approaches for generating private synthetic data from undisclosed real data.
no code implementations • 18 Aug 2021 • George Deligiannidis, Valentin De Bortoli, Arnaud Doucet
We establish the uniform-in-time stability, w.r.t. the marginals, of the Iterative Proportional Fitting procedure.
2 code implementations • 30 Jun 2021 • Achille Thin, Nikita Kotelevskii, Arnaud Doucet, Alain Durmus, Eric Moulines, Maxim Panov
Variational auto-encoders (VAEs) are popular deep latent variable models which are trained by maximizing an Evidence Lower Bound (ELBO).
1 code implementation • 17 Jun 2021 • Eugenio Clerico, George Deligiannidis, Arnaud Doucet
The limit of infinite width allows for substantial simplifications in the analytical study of over-parameterised neural networks.
2 code implementations • NeurIPS 2021 • Valentin De Bortoli, James Thornton, Jeremy Heng, Arnaud Doucet
In contrast, solving the Schrödinger Bridge problem (SB), i.e., an entropy-regularized optimal transport problem on path spaces, yields diffusions which generate samples from the data distribution in finite time.
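In the static, discrete setting the Schrödinger bridge reduces to entropy-regularised OT, solvable by Iterative Proportional Fitting (the Sinkhorn algorithm); a minimal sketch:

```python
import numpy as np

def ipf_discrete(mu, nu, C, eps=0.1, n_iter=200):
    """Iterative Proportional Fitting (Sinkhorn) for the static,
    discrete Schrodinger bridge: find the coupling with marginals
    mu, nu closest in KL to the reference kernel K = exp(-C / eps).
    Each half-step matches one marginal exactly."""
    K = np.exp(-C / eps)
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)               # match column marginal
        u = mu / (K @ v)                 # match row marginal
    return u[:, None] * K * v[None, :]   # transport plan
```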
1 code implementation • 21 May 2021 • Yutian Chen, Liyuan Xu, Caglar Gulcehre, Tom Le Paine, Arthur Gretton, Nando de Freitas, Arnaud Doucet
By applying different IV techniques to OPE, we are not only able to recover previously proposed OPE methods such as model-based techniques but also to obtain competitive new techniques.
1 code implementation • 17 Mar 2021 • Achille Thin, Yazid Janati, Sylvain Le Corff, Charles Ollion, Arnaud Doucet, Alain Durmus, Eric Moulines, Christian Robert
Sampling from a complex distribution $\pi$ and approximating its intractable normalizing constant Z are challenging problems.
1 code implementation • ICLR Workshop Neural_Compression 2021 • Emilien Dupont, Adam Goliński, Milad Alizadeh, Yee Whye Teh, Arnaud Doucet
We propose a new simple approach for image compression: instead of storing the RGB values for each pixel of an image, we store the weights of a neural network overfitted to the image.
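A toy sketch of the idea follows; note the paper uses sinusoidal (SIREN) activations and weight quantisation, whereas this illustration uses a plain ReLU MLP and skips quantisation entirely.

```python
import torch
import torch.nn as nn

def fit_image(image, hidden=64, steps=2000, lr=1e-3):
    """Overfit a tiny coordinate network to one HxWx3 image so that
    its weights *are* the code for that image (toy version)."""
    h, w, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([ys, xs], dim=-1).reshape(-1, 2)
    targets = torch.as_tensor(image, dtype=torch.float32).reshape(-1, 3)
    net = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                        nn.Linear(hidden, hidden), nn.ReLU(),
                        nn.Linear(hidden, 3))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(coords) - targets) ** 2).mean()
        loss.backward()
        opt.step()
    return net   # store these weights instead of the RGB values
```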
1 code implementation • ICLR Workshop Neural_Compression 2021 • Yangjun Ruan, Karen Ullrich, Daniel Severo, James Townsend, Ashish Khisti, Arnaud Doucet, Alireza Makhzani, Chris J. Maddison
Naively applied, our schemes would require more initial bits than the standard bits-back coder, but we show how to drastically reduce this additional cost with couplings in the latent space.
1 code implementation • 15 Feb 2021 • Adrien Corenflos, James Thornton, George Deligiannidis, Arnaud Doucet
Particle Filtering (PF) methods are an established class of procedures for performing inference in non-linear state-space models.
3 code implementations • 15 Feb 2021 • Michael Arbel, Alexander G. D. G. Matthews, Arnaud Doucet
Annealed Importance Sampling (AIS) and its Sequential Monte Carlo (SMC) extensions are state-of-the-art methods for estimating normalizing constants of probability distributions.
1 code implementation • 9 Feb 2021 • Emilien Dupont, Yee Whye Teh, Arnaud Doucet
By treating data points as functions, we can abstract away from the specific type of data we train on and construct models that are agnostic to discretization.
no code implementations • 24 Oct 2020 • Soufiane Hayou, Eugenio Clerico, Bobby He, George Deligiannidis, Arnaud Doucet, Judith Rousseau
Deep ResNet architectures have achieved state-of-the-art performance on many tasks.
1 code implementation • ICLR 2021 • Liyuan Xu, Yutian Chen, Siddarth Srinivasan, Nando de Freitas, Arnaud Doucet, Arthur Gretton
We propose a novel method, deep feature instrumental variable regression (DFIV), to address the case where relations between instruments, treatments, and outcomes may be nonlinear.
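For orientation, the classical linear baseline that DFIV generalises is two-stage least squares; a minimal sketch follows (DFIV replaces both linear stages with learned deep feature maps).

```python
import numpy as np

def two_stage_least_squares(Z, X, Y):
    """Classical linear 2SLS: regress treatment X on instrument Z,
    then regress outcome Y on the fitted treatment."""
    Z1 = np.column_stack([np.ones(len(Z)), Z])
    X_hat = Z1 @ np.linalg.lstsq(Z1, X, rcond=None)[0]   # stage 1
    X1 = np.column_stack([np.ones(len(X_hat)), X_hat])
    beta = np.linalg.lstsq(X1, Y, rcond=None)[0]         # stage 2
    return beta
```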
no code implementations • 5 Oct 2020 • Francisco J. R. Ruiz, Michalis K. Titsias, Taylan Cemgil, Arnaud Doucet
The variational auto-encoder (VAE) is a deep latent variable model that has two neural networks in an autoencoder-like architecture; one of them parameterizes the model's likelihood.
1 code implementation • 10 Jul 2020 • Anthony Caterini, Rob Cornish, Dino Sejdinovic, Arnaud Doucet
Continuously-indexed flows (CIFs) have recently achieved improvements over baseline normalizing flows on a variety of density estimation tasks.
1 code implementation • 26 Apr 2020 • Marco Cuturi, Olivier Teboul, Quentin Berthet, Arnaud Doucet, Jean-Philippe Vert
Our goal in this paper is to propose new group testing algorithms that can operate in a noisy setting (tests can be mistaken) and decide adaptively (based on past results) which groups to test next, with the aim of converging to a good detection as quickly and with as few tests as possible.
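As a point of reference, the classical adaptive baseline in the noiseless case is binary splitting, sketched below; the paper's setting is harder because tests can be mistaken, which rules out this deterministic recursion.

```python
def binary_splitting(test, items):
    """Classical adaptive group testing by recursive halving
    (noiseless baseline). `test(group)` returns True iff the
    group contains at least one positive item."""
    if not test(items):
        return []                     # whole group clean
    if len(items) == 1:
        return items                  # isolated a positive
    mid = len(items) // 2
    return (binary_splitting(test, items[:mid])
            + binary_splitting(test, items[mid:]))

# Example: items 3 and 11 positive among 16.
positives = {3, 11}
found = binary_splitting(lambda g: any(i in positives for i in g),
                         list(range(16)))   # -> [3, 11]
```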
no code implementations • ICLR 2021 • Soufiane Hayou, Jean-Francois Ton, Arnaud Doucet, Yee Whye Teh
Overparameterized Neural Networks (NNs) display state-of-the-art performance.
no code implementations • 31 Dec 2019 • Espen Bernton, Jeremy Heng, Arnaud Doucet, Pierre E. Jacob
This is achieved by iteratively modifying the transition kernels of the reference Markov chain to obtain a process whose marginal distribution at time $T$ becomes closer to $\pi_T = \pi$, via regression-based approximations of the corresponding iterative proportional fitting recursion.
3 code implementations • ICML 2020 • Rob Cornish, Anthony L. Caterini, George Deligiannidis, Arnaud Doucet
We show that normalising flows become pathological when used to model targets whose supports have complicated topologies.
no code implementations • 25 Sep 2019 • Rob Cornish, Anthony Caterini, George Deligiannidis, Arnaud Doucet
We argue that flow-based density models based on continuous bijections are limited in their ability to learn target distributions with complicated topologies, and propose localised generative flows (LGFs) to address this problem.
no code implementations • NeurIPS 2020 • Yutian Chen, Abram L. Friesen, Feryal Behbahani, Arnaud Doucet, David Budden, Matthew W. Hoffman, Nando de Freitas
Many real-world problems, including multi-speaker text-to-speech synthesis, can greatly benefit from the ability to meta-learn large models with only a few task-specific components.
no code implementations • 31 May 2019 • Soufiane Hayou, Arnaud Doucet, Judith Rousseau
Recent work by Jacot et al. (2018) has shown that training a neural network of any kind with gradient descent in parameter space is strongly related to kernel gradient descent in function space with respect to the Neural Tangent Kernel (NTK).
no code implementations • 23 May 2019 • Maxime Vono, Daniel Paulin, Arnaud Doucet
Performing exact Bayesian inference for complex models is computationally intractable.
6 code implementations • NeurIPS 2019 • Emilien Dupont, Arnaud Doucet, Yee Whye Teh
We show that Neural Ordinary Differential Equations (ODEs) learn representations that preserve the topology of the input space and prove that this implies the existence of functions Neural ODEs cannot represent.
Ranked #21 on Image Classification on MNIST
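The paper's proposed fix, Augmented Neural ODEs, lifts the state into a higher-dimensional space where trajectories need not cross; a minimal fixed-step sketch (a real implementation would use an adaptive solver such as torchdiffeq):

```python
import torch
import torch.nn as nn

class AugmentedODEBlock(nn.Module):
    """Sketch of an augmented Neural ODE: append `aug_dim` zeros to
    the input so trajectories can pass around each other in the
    lifted space, then integrate a learned vector field with a
    fixed-step Euler scheme."""
    def __init__(self, dim, aug_dim=1, hidden=32, n_steps=20):
        super().__init__()
        self.aug_dim, self.n_steps = aug_dim, n_steps
        d = dim + aug_dim
        self.field = nn.Sequential(nn.Linear(d, hidden), nn.Tanh(),
                                   nn.Linear(hidden, d))

    def forward(self, x):
        z = torch.cat([x, x.new_zeros(x.shape[0], self.aug_dim)], dim=1)
        dt = 1.0 / self.n_steps
        for _ in range(self.n_steps):
            z = z + dt * self.field(z)     # Euler step of dz/dt = f(z)
        return z
```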
no code implementations • 3 Mar 2019 • Sebastian M. Schmon, Arnaud Doucet, George Deligiannidis
When the weights in a particle filter are not available analytically, standard resampling methods cannot be employed.
no code implementations • 19 Feb 2019 • Soufiane Hayou, Arnaud Doucet, Judith Rousseau
The weight initialization and the activation function of deep neural networks have a crucial impact on the performance of the training procedure.
no code implementations • 5 Feb 2019 • Lawrence Middleton, George Deligiannidis, Arnaud Doucet, Pierre E. Jacob
We consider the approximation of expectations with respect to the distribution of a latent Markov process given noisy measurements.
1 code implementation • 28 Jan 2019 • Robert Cornish, Paul Vanetti, Alexandre Bouchard-Côté, George Deligiannidis, Arnaud Doucet
Bayesian inference via standard Markov Chain Monte Carlo (MCMC) methods is too computationally intensive to handle large datasets, since the cost per step usually scales like $\Theta(n)$ in the number of data points $n$.
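To see where the $\Theta(n)$ comes from, here is plain random-walk Metropolis-Hastings, a baseline sketch rather than the paper's method: every accept/reject decision re-evaluates the likelihood of all $n$ data points.

```python
import numpy as np

def random_walk_mh(log_lik, log_prior, data, theta0, n_iters=1000,
                   step=0.1, rng=None):
    """Plain random-walk Metropolis-Hastings on a scalar parameter.
    The accept/reject test sums the log-likelihood over *all* n
    data points at every step -- the Theta(n) bottleneck."""
    rng = np.random.default_rng() if rng is None else rng
    theta = theta0
    lp = log_prior(theta) + sum(log_lik(theta, x) for x in data)  # O(n)
    chain = []
    for _ in range(n_iters):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_prior(prop) + sum(log_lik(prop, x) for x in data)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return np.array(chain)
```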
4 code implementations • 13 Sep 2018 • Chris J. Maddison, Daniel Paulin, Yee Whye Teh, Brendan O'Donoghue, Arnaud Doucet
Yet, crucially, the kinetic gradient map can be designed to incorporate information about the convex conjugate in a fashion that allows for linear convergence on convex functions that may be non-smooth or non-strongly convex.
no code implementations • 25 Jun 2018 • Vladislav Z. B. Tadic, Arnaud Doucet
Using stochastic gradient search and the optimal filter derivative, it is possible to perform recursive (i.e., online) maximum likelihood estimation in a non-linear state-space model.
3 code implementations • NeurIPS 2018 • Anthony L. Caterini, Arnaud Doucet, Dino Sejdinovic
However, for this methodology to be practically efficient, it is necessary to obtain low-variance unbiased estimators of the ELBO and its gradients with respect to the parameters of interest.
no code implementations • ICLR 2019 • Soufiane Hayou, Arnaud Doucet, Judith Rousseau
We complete this analysis by providing quantitative results showing that, for a class of ReLU-like activation functions, information indeed propagates deeper for an initialization at the edge of chaos.
no code implementations • NeurIPS 2017 • Andrei-Cristian Barbos, Francois Caron, Jean-François Giovannelli, Arnaud Doucet
We propose a generalized Gibbs sampler algorithm for obtaining samples approximately distributed from a high-dimensional Gaussian distribution.
no code implementations • 30 Aug 2017 • Vladislav B. Tadic, Arnaud Doucet
Relying on the same results, the asymptotic behavior of recursive maximum split-likelihood estimation in hidden Markov models is also analyzed.
3 code implementations • NeurIPS 2017 • Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, Yee Whye Teh
When used as a surrogate objective for maximum likelihood estimation in latent variable models, the evidence lower bound (ELBO) produces state-of-the-art results.
no code implementations • 16 Mar 2017 • Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh
The policy gradients of the expected return objective can react slowly to rare rewards.
4 code implementations • 16 Jan 2017 • Joris Bierkens, Alexandre Bouchard-Côté, Arnaud Doucet, Andrew B. Duncan, Paul Fearnhead, Thibaut Lienart, Gareth Roberts, Sebastian J. Vollmer
Piecewise Deterministic Monte Carlo algorithms enable simulation from a posterior distribution, whilst only needing to access a sub-sample of data at each iteration.
Methodology Computation
no code implementations • 8 Jul 2016 • Johan Alenlöv, Arnaud Doucet, Fredrik Lindsten
When following a Markov chain Monte Carlo (MCMC) approach to approximate the posterior distribution in this context, one typically either uses MCMC schemes which target the joint posterior of the parameters and some auxiliary latent variables, or pseudo-marginal Metropolis-Hastings (MH) schemes.
1 code implementation • 16 Feb 2016 • Tom Rainforth, Christian A. Naesseth, Fredrik Lindsten, Brooks Paige, Jan-Willem van de Meent, Arnaud Doucet, Frank Wood
We introduce interacting particle Markov chain Monte Carlo (iPMCMC), a PMCMC method based on an interacting pool of standard and conditional sequential Monte Carlo samplers.
no code implementations • 9 Feb 2016 • Richard Yi Da Xu, Francois Caron, Arnaud Doucet
We introduce here a class of Bayesian nonparametric models to address this problem.
3 code implementations • 8 Oct 2015 • Alexandre Bouchard-Côté, Sebastian J. Vollmer, Arnaud Doucet
We explore and propose several original extensions of an alternative approach introduced recently in Peters and de With (2012) where the target distribution of interest is explored using a continuous-time Markov process.
Methodology Statistics Theory
1 code implementation • 29 Sep 2015 • Jeremy Heng, Arnaud Doucet, Yvo Pokern
Any measurable function $T:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}$ such that $Y=T(X)\sim\pi_{1}$ if $X\sim\pi_{0}$ is called a transport map from $\pi_{0}$ to $\pi_{1}$.
Computation
1 code implementation • NeurIPS 2015 • Thibaut Lienart, Yee Whye Teh, Arnaud Doucet
The computational complexity of our algorithm at each iteration is quadratic in the number of particles.
1 code implementation • 11 May 2015 • Rémi Bardenet, Arnaud Doucet, Chris Holmes
Finally, so far we have only been able to propose subsampling-based methods that display good performance in scenarios where the Bernstein-von Mises approximation of the target posterior distribution is excellent.
no code implementations • NeurIPS 2014 • Brooks Paige, Frank Wood, Arnaud Doucet, Yee Whye Teh
We introduce a new sequential Monte Carlo algorithm we call the particle cascade.
2 code implementations • 16 Oct 2013 • Marco Cuturi, Arnaud Doucet
We present new algorithms to compute the mean of a set of empirical probability measures under the optimal transport metric.
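A closely related fixed-support variant, iterative Bregman projections (Benamou et al., 2015), gives the flavour of such entropically smoothed barycenter computations; this sketch is not necessarily the paper's exact algorithm.

```python
import numpy as np

def entropic_barycenter(A, C, weights, eps=0.05, n_iter=500):
    """Fixed-support entropic Wasserstein barycenter via iterative
    Bregman projections. Columns of A are the input histograms
    (assumed strictly positive, each summing to 1), C is the shared
    cost matrix, `weights` sums to 1."""
    K = np.exp(-C / eps)
    n, m = A.shape
    v = np.ones((n, m))
    for _ in range(n_iter):
        u = A / (K @ v)                          # match input marginals
        Ktu = K.T @ u
        log_p = (weights * np.log(v * Ktu)).sum(axis=1, keepdims=True)
        p = np.exp(log_p)                        # weighted geometric mean
        v = p / Ktu                              # match barycenter marginal
    return p.ravel()
```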
no code implementations • NeurIPS 2009 • Francois Caron, Arnaud Doucet
In latent feature models, we associate to each data point a potentially infinite number of binary latent variables indicating the possession of some features and the IBP is a prior distribution on the associated infinite binary matrix.