no code implementations • 17 Jan 2024 • Thomas Trigo Trindade, Konstantinos C. Zygalakis
We consider the problem of efficiently simulating stochastic models of chemical kinetics.
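The baseline that such methods build on is Gillespie's stochastic simulation algorithm (SSA). Below is a minimal sketch for an illustrative birth-death process; the rate constants and the model itself are assumptions for demonstration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=100, t_end=50.0):
    """Exact SSA trajectory for 0 -> X (rate k_birth), X -> 0 (rate k_death * x)."""
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        rates = np.array([k_birth, k_death * x])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)          # time to next reaction event
        reaction = rng.choice(2, p=rates / total)  # which reaction fires
        x += 1 if reaction == 0 else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)

times, states = gillespie_birth_death()
print(f"mean copy number: {states.mean():.1f}")    # ≈ k_birth / k_death = 100
```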
1 code implementation • 18 Aug 2023 • Teresa Klatzer, Paul Dobson, Yoann Altmann, Marcelo Pereyra, Jesús María Sanz-Serna, Konstantinos C. Zygalakis
This discretisation is asymptotically unbiased for Gaussian targets and shown to converge in an accelerated manner for any target that is $\kappa$-strongly log-concave (i.e., requiring on the order of $\sqrt{\kappa}$ iterations to converge, similarly to accelerated optimisation schemes), comparing favourably to [M. Pereyra, L. Vargas Mieles, K. C. Zygalakis].
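As a rough illustration of how $\kappa$ enters such schemes (this is a generic kinetic Langevin discretisation, not the paper's unbiased method), consider sampling a Gaussian target whose precision has condition number $\kappa = 100$; the friction and step size below are heuristic assumptions, with the friction tuned to the slow mode as in accelerated (heavy-ball) optimisation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian target pi(x) ∝ exp(-x^T diag(lam) x / 2); kappa = lam.max()/lam.min().
lam = np.array([1.0, 100.0])
gamma = 2 * np.sqrt(lam.min())    # critical damping for the slow mode (assumption)
h = 0.1 / np.sqrt(lam.max())      # step size resolving the fastest mode

x = np.full(2, 5.0)               # start far from the mode
v = np.zeros(2)
for n in range(2000):
    grad = lam * x                # ∇U(x) for the quadratic potential
    v += h * (-grad - gamma * v) + np.sqrt(2 * gamma * h) * rng.standard_normal(2)
    x += h * v                    # symplectic-Euler-style position update
print("final state (stationary spread is 1/sqrt(lam) per coordinate):", x)
```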
no code implementations • 17 Jul 2023 • Tianming Bai, Aretha L. Teckentrup, Konstantinos C. Zygalakis
This work is concerned with the use of Gaussian surrogate models for Bayesian inverse problems associated with linear partial differential equations.
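A minimal sketch of the surrogate idea, with a cheap stand-in for the PDE solve and a hand-rolled Gaussian-process interpolant (the forward map, kernel length-scale, and design size are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def forward(theta):                # stand-in for an expensive PDE solve
    return np.sin(3 * theta) + 0.5 * theta

def rbf(a, b, ell=0.3):            # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

# Fit the surrogate on a small design set.
design = np.linspace(-2, 2, 15)
K = rbf(design, design) + 1e-8 * np.eye(15)
alpha = np.linalg.solve(K, forward(design))

def surrogate(theta):              # GP posterior mean, used in place of forward()
    return float(rbf(np.atleast_1d(theta), design) @ alpha)

y_obs = forward(0.7) + 0.05 * rng.standard_normal()

def log_post(theta, sigma=0.05):   # Gaussian likelihood + N(0, 1) prior
    return -0.5 * (y_obs - surrogate(theta)) ** 2 / sigma**2 - 0.5 * theta**2

print(f"log-posterior at truth vs far away: {log_post(0.7):.1f} vs {log_post(-1.5):.1f}")
```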
1 code implementation • 28 Jun 2022 • Marcelo Pereyra, Luis A. Vargas-Mieles, Konstantinos C. Zygalakis
Additionally, instead of viewing the augmented posterior distribution as an approximation of the original model, we propose to consider it as a generalisation of this model.
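A hedged 1-D sketch of the augmentation viewpoint: the parameter $\rho$ below indexes a family of augmented models that recovers the original posterior as $\rho \to 0$, and both Gibbs conditionals are Gaussian by construction (the specific $f$, $g$, and parameter values are illustrative assumptions, not the paper's setup).

```python
import numpy as np

rng = np.random.default_rng(3)

# Augmented density p(x, u) ~ exp(-f(u) - g(x) - (x - u)^2 / (2 rho^2)), with
# f(u) = (y - u)^2 / (2 s^2) and g(x) = x^2 / 2, sampled by alternating Gibbs.
y, s, rho = 2.0, 0.5, 0.1
xs, x = [], 0.0
for n in range(5000):
    prec_u = 1 / s**2 + 1 / rho**2                     # u | x is Gaussian
    u = rng.normal((y / s**2 + x / rho**2) / prec_u, prec_u**-0.5)
    prec_x = 1 + 1 / rho**2                            # x | u is Gaussian
    x = rng.normal((u / rho**2) / prec_x, prec_x**-0.5)
    xs.append(x)
xs = np.array(xs[500:])
print(f"posterior mean ≈ {xs.mean():.3f} (exact as rho -> 0: {y / (1 + s**2):.3f})")
```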
no code implementations • 26 Apr 2021 • J. M. Sanz-Serna, Konstantinos C. Zygalakis
We present a framework that allows for the non-asymptotic study of the $2$-Wasserstein distance between the invariant distribution of an ergodic stochastic differential equation and the distribution of its numerical approximation in the strongly log-concave case.
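A closed-form illustration (not the paper's framework) using the Ornstein-Uhlenbeck process, where both the exact and the Euler-Maruyama invariant laws are Gaussian and the 2-Wasserstein distance is available explicitly:

```python
import numpy as np

# For dX = -theta X dt + sigma dW, the exact invariant law is N(0, sigma^2/(2 theta))
# and Euler-Maruyama's is N(0, sigma^2/(2 theta - theta^2 h)); between centred 1-D
# Gaussians, W2 is the difference of standard deviations.
theta, sigma = 1.0, 1.0
v_exact = sigma**2 / (2 * theta)
for h in [0.5, 0.1, 0.01]:
    v_h = sigma**2 / (2 * theta - theta**2 * h)
    w2 = abs(np.sqrt(v_exact) - np.sqrt(v_h))
    print(f"h = {h:5.2f}: W2 = {w2:.2e}")   # first-order bias, O(h)
```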
no code implementations • 18 Mar 2021 • Matthew Holden, Marcelo Pereyra, Konstantinos C. Zygalakis
Bayesian computation is performed by using a parallel tempered version of the preconditioned Crank-Nicolson algorithm on the manifold, which is shown to be ergodic and robust to the non-convex nature of these data-driven models.
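For reference, a minimal sketch of plain pCN, without the parallel tempering and manifold structure used in the paper; the toy likelihood, dimension, and step parameter $\beta$ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def log_lik(x):                                      # illustrative toy likelihood
    return -0.5 * np.sum((x - 1.0) ** 2) / 0.5**2

beta, d = 0.2, 3
x = np.zeros(d)
accepts = 0
for n in range(5000):
    xi = rng.standard_normal(d)
    prop = np.sqrt(1 - beta**2) * x + beta * xi      # proposal preserves the N(0, I) prior
    if np.log(rng.uniform()) < log_lik(prop) - log_lik(x):
        x, accepts = prop, accepts + 1               # accept ratio uses the likelihood only
print(f"acceptance rate: {accepts / 5000:.2f}, final state: {x}")
```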
no code implementations • 23 Sep 2020 • Oliver M. Crook, Mihai Cucuringu, Tim Hurst, Carola-Bibiane Schönlieb, Matthew Thorpe, Konstantinos C. Zygalakis
The transportation $\mathrm{L}^p$ distance, denoted $\mathrm{TL}^p$, has been proposed as a generalisation of Wasserstein $\mathrm{W}^p$ distances motivated by the property that it can be applied directly to colour or multi-channelled images, as well as multivariate time-series without normalisation or mass constraints.
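For uniform discrete measures with equal numbers of points, $\mathrm{TL}^p$ reduces to an optimal assignment with ground cost $|x_i - y_j|^p + |f(x_i) - g(y_j)|^p$. A minimal sketch on two illustrative 1-D signals:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

p = 2
x = np.linspace(0, 1, 50)
f = np.sin(2 * np.pi * x)
g = np.sin(2 * np.pi * (x - 0.1))        # shifted copy on the same grid

# Ground cost couples horizontal (domain) and vertical (signal value) movement.
cost = np.abs(x[:, None] - x[None, :]) ** p + np.abs(f[:, None] - g[None, :]) ** p
rows, cols = linear_sum_assignment(cost)
tlp = cost[rows, cols].mean() ** (1 / p)
print(f"TL^{p} distance: {tlp:.4f}")
```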
no code implementations • 1 Sep 2020 • J. M. Sanz-Serna, Konstantinos C. Zygalakis
In the appropriate limit, this family of methods may be seen as a discretization of a family of second-order ordinary differential equations for which we construct (continuous) Lyapunov functions by means of the LMI framework.
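A small numerical illustration of the Lyapunov-via-LMI idea on a linear test problem (the coefficients are illustrative assumptions): for $\ddot{x} + \gamma \dot{x} + \mu x = 0$ written as $\dot{z} = Az$, solving the Lyapunov equation $A^\top P + PA = -Q$ and checking $P \succ 0$ certifies that $V(z) = z^\top P z$ decays along trajectories.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

gamma, mu = 1.0, 2.0
A = np.array([[0.0, 1.0], [-mu, -gamma]])
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)   # solves A^T P + P A = -Q
eigs = np.linalg.eigvalsh(P)
print("P positive definite:", bool(eigs.min() > 0), "| eigenvalues:", eigs)
```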
no code implementations • 11 Nov 2019 • Konstantin Rusch, John W. Pearson, Konstantinos C. Zygalakis
The key benefit of this approach is that the corresponding RNN inherits the favorable long time properties of the Hamiltonian system, which in turn allows us to control the hidden states gradient with a hyperparameter of the Hamiltonian RNN architecture.
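A hedged sketch of a Hamiltonian-style RNN cell (the tanh potential-gradient, weight scaling, and step size $h$ are illustrative assumptions, not the paper's exact architecture): the hidden state $(q, p)$ is advanced by a symplectic-Euler-style update, and the step size $h$ is the hyperparameter controlling the hidden-state dynamics.

```python
import numpy as np

rng = np.random.default_rng(5)

d, h = 8, 0.1                             # hidden size and step-size hyperparameter
W = rng.standard_normal((d, d)) / np.sqrt(d)
U = rng.standard_normal((d, 1)) / np.sqrt(d)

def cell(q, p, u):
    # symplectic-Euler-style step: momentum update from a tanh layer, then position
    p = p - h * np.tanh(W @ q + U @ u)
    q = q + h * p
    return q, p

q, p = np.zeros(d), np.zeros(d)
for t in range(100):                      # feed a scalar input sequence
    q, p = cell(q, p, np.array([np.sin(0.1 * t)]))
print("final hidden-state norm:", float(np.linalg.norm(np.concatenate([q, p]))))
```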
no code implementations • 23 Sep 2019 • Oliver M. Crook, Tim Hurst, Carola-Bibiane Schönlieb, Matthew Thorpe, Konstantinos C. Zygalakis
In this paper we extend the labels by minimising the constrained discrete $p$-Dirichlet energy.
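For $p = 2$ this is Laplace learning: minimising $\sum_{ij} w_{ij}(f_i - f_j)^2$ subject to the labelled values gives the harmonic extension $L_{uu} f_u = -L_{ul} f_l$. A minimal sketch on an illustrative random geometric graph:

```python
import numpy as np

rng = np.random.default_rng(6)

pts = rng.uniform(size=(60, 2))
W = np.exp(-np.sum((pts[:, None] - pts[None, :]) ** 2, axis=-1) / 0.05)
np.fill_diagonal(W, 0)
L = np.diag(W.sum(1)) - W                       # graph Laplacian

labelled = np.array([0, 1])                     # nodes with known labels
f_l = np.array([-1.0, 1.0])
unlabelled = np.setdiff1d(np.arange(60), labelled)
f_u = np.linalg.solve(L[np.ix_(unlabelled, unlabelled)],
                      -L[np.ix_(unlabelled, labelled)] @ f_l)
# The maximum principle keeps extended labels inside the labelled range.
print("extended labels lie in [-1, 1]:", bool(f_u.min() >= -1 and f_u.max() <= 1))
```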
no code implementations • 26 Mar 2017 • Andrea L. Bertozzi, Xiyang Luo, Andrew M. Stuart, Konstantinos C. Zygalakis
In this paper we introduce, develop algorithms for, and investigate the properties of, a variety of Bayesian models for the task of binary classification; via the posterior distribution on the classification labels, these methods automatically give measures of uncertainty.
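A hedged sketch of the uncertainty-quantification idea with a probit likelihood and a simple $N(0, I)$ latent prior (the paper uses graph-based priors; the prior, noise level, and node count here are illustrative assumptions). The Monte Carlo output is a posterior class probability per node rather than a single hard label.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)

n, gamma = 20, 0.3
obs = {2: 1.0, 15: -1.0}                        # node index -> observed binary label

def log_lik(u):                                 # probit likelihood
    return sum(norm.logcdf(y * u[j] / gamma) for j, y in obs.items())

u, counts, beta = np.zeros(n), np.zeros(n), 0.3
for it in range(4000):                          # pCN within the N(0, I) prior
    prop = np.sqrt(1 - beta**2) * u + beta * rng.standard_normal(n)
    if np.log(rng.uniform()) < log_lik(prop) - log_lik(u):
        u = prop
    counts += u > 0                             # tally positive-class indicator
print("P(label = +1) at observed nodes:", counts[2] / 4000, counts[15] / 4000)
```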
no code implementations • 2 Jan 2015 • Sebastian J. Vollmer, Konstantinos C. Zygalakis, Yee Whye Teh
For this toy model we study the gain of the SGLD over the standard Euler method in the limit of large data sets.
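A minimal SGLD sketch on the Gaussian toy model $\theta \sim N(0, 1)$, $y_i \mid \theta \sim N(\theta, 1)$, where the exact posterior is available for comparison; the data size, minibatch size, and step size are illustrative assumptions. The only change relative to the full-gradient Euler (ULA) scheme is the rescaled minibatch gradient.

```python
import numpy as np

rng = np.random.default_rng(8)

N, m, h = 10_000, 100, 1e-4
data = rng.normal(1.0, 1.0, size=N)

theta, samples = 0.0, []
for it in range(5000):
    batch = rng.choice(data, size=m)
    grad = -theta + (N / m) * np.sum(batch - theta)   # prior + rescaled batch term
    theta += 0.5 * h * grad + np.sqrt(h) * rng.standard_normal()
    samples.append(theta)
post_prec = 1 + N                                      # exact posterior precision
print(f"SGLD mean {np.mean(samples[1000:]):.4f} vs exact {data.sum() / post_prec:.4f}")
```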