no code implementations • 14 Jul 2023 • Tobias Schröder, Zijing Ou, Yingzhen Li, Andrew B. Duncan
Training energy-based models (EBMs) on discrete spaces is challenging because sampling over such spaces can be difficult.
1 code implementation • NeurIPS 2023 • Tobias Schröder, Zijing Ou, Jen Ning Lim, Yingzhen Li, Sebastian J. Vollmer, Andrew B. Duncan
Energy-based models are a simple yet powerful class of probabilistic models, but their widespread adoption has been limited by the computational burden of training them.
1 code implementation • 28 Apr 2023 • Xing Liu, Andrew B. Duncan, Axel Gandy
Kernelized Stein discrepancy (KSD) is a score-based discrepancy widely used in goodness-of-fit tests.
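To fix ideas, a minimal 1-D sketch of how a KSD statistic can be computed, assuming a standard-normal target with known score function; the Langevin Stein kernel, RBF base kernel, and bandwidth `h=1.0` below are illustrative choices, not the paper's construction.

```python
import numpy as np

def stein_kernel(x, y, score, h=1.0):
    # Langevin Stein kernel in 1-D, built from an RBF base kernel k(x, y)
    d = x - y
    k = np.exp(-d**2 / (2 * h**2))
    dk_dx = -d / h**2 * k
    dk_dy = d / h**2 * k
    d2k = (1.0 / h**2 - d**2 / h**4) * k
    s_x, s_y = score(x), score(y)
    return s_x * s_y * k + s_x * dk_dy + s_y * dk_dx + d2k

def ksd_squared(samples, score, h=1.0):
    # U-statistic estimate of KSD^2: mean of the Stein kernel over distinct pairs
    n = len(samples)
    X, Y = np.meshgrid(samples, samples)
    K = stein_kernel(X, Y, score, h)
    return (K.sum() - np.trace(K)) / (n * (n - 1))

rng = np.random.default_rng(0)
score = lambda x: -x                                # grad log density of N(0, 1)
good = ksd_squared(rng.normal(0, 1, 300), score)    # samples match the target
bad = ksd_squared(rng.normal(2, 1, 300), score)     # shifted samples
```

The statistic concentrates near zero when the samples come from the target and grows otherwise, which is what makes it usable in a goodness-of-fit test.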
no code implementations • 11 Feb 2023 • Kevin H. Huang, Xing Liu, Andrew B. Duncan, Axel Gandy
We prove a convergence theorem for U-statistics of degree two, where the data dimension $d$ is allowed to scale with sample size $n$.
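For orientation, a degree-two U-statistic averages a symmetric kernel over all distinct pairs of observations. A minimal sketch (unrelated to the paper's high-dimensional regime), using the kernel h(a, b) = (a - b)^2 / 2, which recovers the unbiased sample variance exactly:

```python
import numpy as np

def u_statistic_deg2(data, h):
    # Degree-two U-statistic: average a symmetric kernel over distinct pairs
    n = len(data)
    total = sum(h(data[i], data[j]) for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

rng = np.random.default_rng(1)
x = rng.normal(size=200)
# h(a, b) = (a - b)^2 / 2 makes the U-statistic the unbiased sample variance
u = u_statistic_deg2(x, lambda a, b: 0.5 * (a - b) ** 2)
```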
1 code implementation • 10 Sep 2022 • Enrico Crovini, Simon L. Cotter, Konstantinos Zygalakis, Andrew B. Duncan
In this work we reformulate batch BO as an optimisation problem over the space of probability measures.
1 code implementation • 9 Jun 2022 • George Wynne, Mikołaj Kasprzak, Andrew B. Duncan
Kernel Stein discrepancy (KSD) is a widely used kernel-based measure of discrepancy between probability measures.
no code implementations • 7 Apr 2021 • Oliver R. A. Dunbar, Andrew B. Duncan, Andrew M. Stuart, Marie-Therese Wolfram
Ensemble Kalman methods are shown to behave favourably in the presence of noise in the parameter-to-data map, whereas Langevin-based methods are adversely affected.
1 code implementation • 31 Dec 2020 • Chun Yui Wong, Pranay Seshadri, Ashley Scillitoe, Bryn Noel Ubald, Andrew B. Duncan, Geoffrey Parks
In Part I of this two-part paper, a workflow for the formulation of blade envelopes is described and demonstrated.
1 code implementation • 22 Nov 2020 • Chun Yui Wong, Pranay Seshadri, Ashley Scillitoe, Andrew B. Duncan, Geoffrey Parks
Blades manufactured through flank and point milling will likely exhibit geometric variability.
no code implementations • 7 Oct 2020 • Peter Yatsyshin, Serafim Kalliadasis, Andrew B. Duncan
In our case, the output of the learning algorithm is a probability distribution over a family of free energy functionals, consistent with the observed particle data.
no code implementations • 3 Sep 2020 • Jon Cockayne, Andrew B. Duncan
Calibration of large-scale differential equation models to observational or experimental data is a widespread challenge throughout applied sciences and engineering.
1 code implementation • 25 Aug 2020 • George Wynne, Andrew B. Duncan
We propose a nonparametric two-sample test procedure based on Maximum Mean Discrepancy (MMD) for testing the hypothesis that two samples of functions have the same underlying distribution, using kernels defined on function spaces.
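A hedged sketch of the basic ingredient: the unbiased MMD² estimator with an RBF kernel on finite-dimensional vectors. The paper's contribution is kernels defined on function spaces, which this simplification does not capture.

```python
import numpy as np

def mmd2_unbiased(X, Y, h=1.0):
    # Unbiased estimate of squared MMD between samples X and Y (RBF kernel)
    def k(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * h**2))
    m, n = len(X), len(Y)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(2)
X = rng.normal(0, 1, (150, 2))
m_same = mmd2_unbiased(X, rng.normal(0, 1, (150, 2)))  # same distribution
m_diff = mmd2_unbiased(X, rng.normal(1, 1, (150, 2)))  # shifted distribution
```

In a test, the observed statistic would be compared against a permutation null; here the point is only that the estimator separates equal from unequal distributions.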
no code implementations • 12 Jun 2020 • Shijing Si, Chris J. Oates, Andrew B. Duncan, Lawrence Carin, François-Xavier Briol
Control variates are a well-established tool to reduce the variance of Monte Carlo estimators.
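As a toy illustration of the idea (not the paper's estimators): estimate E[exp(X)] for X ~ N(0, 1) using g(X) = X, whose mean is known exactly, as a control variate.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=10_000)
f = np.exp(x)        # integrand: estimate E[exp(X)] = exp(1/2)
g = x                # control variate with known mean E[X] = 0

beta = np.cov(f, g)[0, 1] / np.var(g)   # near-optimal coefficient
plain = f.mean()                        # plain Monte Carlo estimate
cv = (f - beta * (g - 0.0)).mean()      # control-variate estimate
true = np.exp(0.5)
```

Subtracting beta * (g - E[g]) leaves the mean unchanged but cancels the part of f's fluctuation correlated with g, shrinking the estimator's variance.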
no code implementations • NeurIPS 2019 • Alessandro Barp, Francois-Xavier Briol, Andrew B. Duncan, Mark Girolami, Lester Mackey
We provide a unifying perspective of these techniques as minimum Stein discrepancy estimators, and use this lens to design new diffusion kernel Stein discrepancy (DKSD) and diffusion score matching (DSM) estimators with complementary strengths.
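For orientation, plain (non-diffusion) score matching minimizes the Hyvärinen objective E[0.5 * s(x)^2 + s'(x)], which never touches the normalizing constant. A minimal 1-D sketch fitting the mean of a unit-variance Gaussian; this is a simplified instance, not the DKSD or DSM estimators of the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
data = rng.normal(3.0, 1.0, 5000)

def sm_loss(mu):
    # Hyvarinen score matching for the model N(mu, 1):
    # E[0.5 * s(x)^2 + s'(x)] with s(x) = -(x - mu), so s'(x) = -1
    s = -(data - mu)
    return np.mean(0.5 * s**2 - 1.0)

# the empirical objective is minimized at the sample mean
grid = np.linspace(0.0, 6.0, 601)
best = grid[np.argmin([sm_loss(m) for m in grid])]
```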
no code implementations • 13 Jun 2019 • Francois-Xavier Briol, Alessandro Barp, Andrew B. Duncan, Mark Girolami
While likelihood-based inference and its variants provide a statistically efficient and widely applicable approach to parametric inference, their application to models involving intractable likelihoods poses challenges.
4 code implementations • 16 Jan 2017 • Joris Bierkens, Alexandre Bouchard-Côté, Arnaud Doucet, Andrew B. Duncan, Paul Fearnhead, Thibaut Lienart, Gareth Roberts, Sebastian J. Vollmer
Piecewise Deterministic Monte Carlo algorithms enable simulation from a posterior distribution, whilst only needing to access a sub-sample of data at each iteration.
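A minimal 1-D sketch of one such algorithm, the Zig-Zag process, for a standard-normal target, with the data-subsampling mechanism that is the paper's point omitted; for this target the switching times can be sampled exactly by inverting the integrated rate.

```python
import numpy as np

def zigzag_std_normal(n_events=20_000, seed=4):
    # Zig-Zag process targeting N(0, 1): the state moves with velocity
    # theta in {-1, +1}, flipped at rate lambda(x, theta) = max(0, theta * x)
    rng = np.random.default_rng(seed)
    x, theta = 0.0, 1.0
    starts, lengths, vels = [], [], []
    for _ in range(n_events):
        a = theta * x
        e = rng.exponential()
        # invert the integrated rate to draw the next switching time exactly
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        starts.append(x)
        lengths.append(tau)
        vels.append(theta)
        x += theta * tau
        theta = -theta
    x0 = np.array(starts)
    tau = np.array(lengths)
    x1 = x0 + np.array(vels) * tau
    # exact time averages along the piecewise-linear trajectory
    mean = np.average((x0 + x1) / 2, weights=tau)
    second = np.average((x0**2 + x0 * x1 + x1**2) / 3, weights=tau)
    return mean, second

mean, second = zigzag_std_normal()
```

Ergodic averages are taken over the continuous trajectory rather than the event points alone; the time-averaged mean and second moment should approach 0 and 1 for this target.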
no code implementations • 21 Nov 2016 • Jackson Gorham, Andrew B. Duncan, Sebastian J. Vollmer, Lester Mackey
Stein's method for measuring convergence to a continuous target distribution relies on an operator characterizing the target and Stein factor bounds on the solutions of an associated differential equation.