Search Results for author: Konstantinos C. Zygalakis

Found 12 papers, 2 papers with code

Accelerated Bayesian imaging by relaxed proximal-point Langevin sampling

1 code implementation · 18 Aug 2023 · Teresa Klatzer, Paul Dobson, Yoann Altmann, Marcelo Pereyra, Jesús María Sanz-Serna, Konstantinos C. Zygalakis

This discretisation is asymptotically unbiased for Gaussian targets and is shown to converge in an accelerated manner for any target that is $\kappa$-strongly log-concave (i.e., requiring on the order of $\sqrt{\kappa}$ iterations to converge, similarly to accelerated optimisation schemes), comparing favourably to [M. Pereyra, L. Vargas Mieles, K. C.

Bayesian Inference · Image Deconvolution
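The paper's accelerated sampler is a more sophisticated discretisation, but the underlying idea of discretising overdamped Langevin dynamics to sample a log-concave target can be sketched with the plain unadjusted Langevin algorithm (ULA); the Gaussian target and step size below are illustrative, not taken from the paper.

```python
import numpy as np

def ula_sample(grad_log_pi, x0, step, n_iter, rng):
    """Unadjusted Langevin algorithm: a simple Euler-Maruyama discretisation
    of the Langevin SDE dX = grad log pi(X) dt + sqrt(2) dW."""
    x = np.array(x0, dtype=float)
    samples = np.empty((n_iter, x.size))
    for k in range(n_iter):
        x = x + step * grad_log_pi(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.size)
        samples[k] = x
    return samples

# Illustrative strongly log-concave target: N(0, 1), with grad log pi(x) = -x.
rng = np.random.default_rng(0)
samples = ula_sample(lambda x: -x, [5.0], step=0.1, n_iter=20000, rng=rng)
burn = samples[2000:]
print(burn.mean(), burn.var())   # near 0 and 1, up to an O(step) bias
```

Note that this basic scheme is biased at any finite step size even for Gaussian targets; removing that asymptotic bias for Gaussians is one of the features claimed for the relaxed proximal-point discretisation.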

Gaussian processes for Bayesian inverse problems associated with linear partial differential equations

no code implementations · 17 Jul 2023 · Tianming Bai, Aretha L. Teckentrup, Konstantinos C. Zygalakis

This work is concerned with the use of Gaussian surrogate models for Bayesian inverse problems associated with linear partial differential equations.

Gaussian Processes
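As a rough illustration of the surrogate idea: a Gaussian-process posterior mean fitted to a few exact evaluations of the forward map can stand in for the expensive PDE solve inside Bayesian inversion. The forward map `G` below is a hypothetical smooth stand-in, not the paper's PDE operator, and the kernel parameters are illustrative.

```python
import numpy as np

def rbf(a, b, ell=0.2, sig=1.0):
    """Squared-exponential kernel k(a, b) = sig^2 exp(-(a - b)^2 / (2 ell^2))."""
    d = a[:, None] - b[None, :]
    return sig**2 * np.exp(-0.5 * (d / ell) ** 2)

# Hypothetical forward map: in the paper this is a PDE solution operator;
# a smooth stand-in keeps the sketch self-contained.
G = lambda x: np.sin(3 * x)

X = np.linspace(0, 1, 8)        # design points where the "PDE" is solved exactly
y = G(X)
Xs = np.linspace(0, 1, 50)      # query points for the cheap surrogate

K = rbf(X, X) + 1e-6 * np.eye(len(X))    # jitter for numerical stability
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = rbf(Xs, X) @ alpha                # GP posterior-mean surrogate of G
print(np.max(np.abs(mean - G(Xs))))      # small approximation error
```

In the inverse-problem setting, this surrogate mean (and ideally its predictive variance) replaces `G` inside the likelihood evaluations of an MCMC sampler.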

The split Gibbs sampler revisited: improvements to its algorithmic structure and augmented target distribution

1 code implementation · 28 Jun 2022 · Marcelo Pereyra, Luis A. Vargas-Mieles, Konstantinos C. Zygalakis

Additionally, instead of viewing the augmented posterior distribution as an approximation of the original model, we propose to consider it as a generalisation of this model.

Data Augmentation · Deblurring +2
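The splitting idea can be sketched on a toy scalar model where both conditionals of the augmented target are Gaussian and can be sampled exactly; all factors and parameters here are illustrative, not the paper's imaging model.

```python
import numpy as np

# Augmented target p(x, z) ∝ exp(-f(x) - g(z) - (x - z)^2 / (2 rho2)) with
# illustrative Gaussian factors f(x) = x^2/2 (prior) and g(z) = (z - m)^2/2
# (data term); both conditionals are then Gaussian and sampled exactly.
rho2 = 0.5                      # coupling parameter of the splitting
m = 2.0
prec = 1.0 + 1.0 / rho2         # common conditional precision

rng = np.random.default_rng(1)
x = z = 0.0
xs = []
for k in range(20000):
    x = (z / rho2) / prec + rng.standard_normal() / np.sqrt(prec)        # x | z
    z = (m + x / rho2) / prec + rng.standard_normal() / np.sqrt(prec)    # z | x
    xs.append(x)
xs = np.array(xs[2000:])
# The augmented marginal of x is N(m / (2 + rho2), (1 + rho2) / (2 + rho2)):
# viewing it as a generalisation of the rho2 -> 0 model, per the abstract.
print(xs.mean())
```

The chain mean matches the augmented marginal mean m/(2 + rho2), which tends to the original posterior mean as the coupling rho2 shrinks.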

Wasserstein distance estimates for the distributions of numerical approximations to ergodic stochastic differential equations

no code implementations · 26 Apr 2021 · J. M. Sanz-Serna, Konstantinos C. Zygalakis

We present a framework that allows for the non-asymptotic study of the $2$-Wasserstein distance between the invariant distribution of an ergodic stochastic differential equation and the distribution of its numerical approximation in the strongly log-concave case.
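In the simplest instance of this setting both distributions are Gaussian and the 2-Wasserstein distance is available in closed form, so the discretisation bias can be computed exactly; the Ornstein-Uhlenbeck example below is a standard illustration, not taken from the paper.

```python
import numpy as np

# The Ornstein-Uhlenbeck SDE dX = -X dt + sqrt(2) dW has invariant law N(0, 1).
# Euler-Maruyama, X_{k+1} = (1 - h) X_k + sqrt(2h) xi_k, is a Gaussian AR(1)
# chain with invariant law N(0, 2h / (1 - (1 - h)^2)) = N(0, 1 / (1 - h/2)).
def w2_gap(h):
    sigma_num = np.sqrt(1.0 / (1.0 - h / 2.0))
    # For centred 1-D Gaussians, W2(N(0, s1^2), N(0, s2^2)) = |s1 - s2|.
    return abs(sigma_num - 1.0)

for h in (0.2, 0.1, 0.05):
    print(h, w2_gap(h))   # the gap shrinks roughly linearly in h: first order
```

Halving the step size roughly halves the Wasserstein gap, consistent with the first-order weak accuracy of Euler-Maruyama on this target.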

Bayesian Imaging With Data-Driven Priors Encoded by Neural Networks: Theory, Methods, and Algorithms

no code implementations · 18 Mar 2021 · Matthew Holden, Marcelo Pereyra, Konstantinos C. Zygalakis

Bayesian computation is performed by using a parallel tempered version of the preconditioned Crank-Nicolson algorithm on the manifold, which is shown to be ergodic and robust to the non-convex nature of these data-driven models.

Bayesian Inference · Generative Adversarial Network +2
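A minimal sketch of the plain preconditioned Crank-Nicolson kernel (without the paper's parallel tempering or manifold structure), on a scalar toy posterior with a standard Gaussian prior; the data-misfit `phi` and all parameters are illustrative.

```python
import numpy as np

def pcn(phi, beta, n_iter, rng):
    """Preconditioned Crank-Nicolson MH for a target exp(-phi(x)) * N(0, 1):
    the prior-preserving proposal makes the acceptance ratio depend on phi alone."""
    x = 0.0
    out = np.empty(n_iter)
    for k in range(n_iter):
        prop = np.sqrt(1.0 - beta**2) * x + beta * rng.standard_normal()
        if np.log(rng.uniform()) < phi(x) - phi(prop):
            x = prop
        out[k] = x
    return out

rng = np.random.default_rng(2)
phi = lambda x: 0.5 * (x - 1.0) ** 2     # toy Gaussian data-misfit
chain = pcn(phi, beta=0.5, n_iter=40000, rng=rng)[4000:]
print(chain.mean(), chain.var())         # exact posterior here is N(0.5, 0.5)
```

Because the proposal leaves the prior invariant, the same kernel is well defined in high (even infinite) dimensions, which is what makes it attractive with neural-network priors.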

A Linear Transportation $\mathrm{L}^p$ Distance for Pattern Recognition

no code implementations · 23 Sep 2020 · Oliver M. Crook, Mihai Cucuringu, Tim Hurst, Carola-Bibiane Schönlieb, Matthew Thorpe, Konstantinos C. Zygalakis

The transportation $\mathrm{L}^p$ distance, denoted $\mathrm{TL}^p$, has been proposed as a generalisation of Wasserstein $\mathrm{W}^p$ distances motivated by the property that it can be applied directly to colour or multi-channelled images, as well as multivariate time-series without normalisation or mass constraints.

Time Series · Time Series Analysis
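For discrete signals carrying uniform weights, the $\mathrm{TL}^p$ distance reduces to an optimal bijective matching under the lifted ground cost, which a linear assignment solver can compute directly; the signals and the helper name below are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def tlp(x, f, y, g, p=2):
    """TL^p distance between discrete signals (x, f) and (y, g) with uniform
    weights: optimal matching under the lifted cost |x - y|^p + |f(x) - g(y)|^p."""
    cost = np.abs(x[:, None] - y[None, :]) ** p \
         + np.abs(f[:, None] - g[None, :]) ** p
    r, c = linear_sum_assignment(cost)
    return cost[r, c].mean() ** (1.0 / p)

x = np.linspace(0, 1, 20)
f = np.sin(2 * np.pi * x)
g = np.sin(2 * np.pi * (x - 0.1))   # shifted copy: no mass normalisation needed
print(tlp(x, f, x, f), tlp(x, f, x, g))
```

Note that `f` and `g` enter as signal values, not as probability masses, which is the property that lets the distance apply to multi-channel images and time series without normalisation.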

The connections between Lyapunov functions for some optimization algorithms and differential equations

no code implementations · 1 Sep 2020 · J. M. Sanz-Serna, Konstantinos C. Zygalakis

In the appropriate limit, this family of methods may be seen as a discretization of a family of second-order ordinary differential equations for which we construct (continuous) Lyapunov functions by means of the LMI framework.
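The flavour of the argument can be seen on the heavy-ball ODE with a quadratic objective, where the energy V(x, x') = f(x) + |x'|^2/2 is a continuous Lyapunov function (dV/dt = -gamma |x'|^2 <= 0) and a small-step discretisation inherits its decay; the coefficients below are illustrative, and the LMI machinery of the paper is not reproduced.

```python
import numpy as np

# Heavy-ball ODE x'' + gamma x' + grad f(x) = 0 with f(x) = x^2 / 2.
# V(x, v) = f(x) + v^2 / 2 satisfies dV/dt = -gamma v^2 <= 0 along solutions,
# so it is a (continuous) Lyapunov function certifying convergence.
gamma, h = 1.0, 0.01
x, v = 2.0, 0.0
V = lambda x, v: 0.5 * x**2 + 0.5 * v**2
vals = []
for k in range(5000):
    v += h * (-gamma * v - x)    # semi-implicit Euler on the first-order system
    x += h * v
    vals.append(V(x, v))
print(vals[0], vals[-1])         # the energy decays by many orders of magnitude
```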

Constructing Gradient Controllable Recurrent Neural Networks Using Hamiltonian Dynamics

no code implementations · 11 Nov 2019 · Konstantin Rusch, John W. Pearson, Konstantinos C. Zygalakis

The key benefit of this approach is that the corresponding RNN inherits the favorable long time properties of the Hamiltonian system, which in turn allows us to control the hidden states gradient with a hyperparameter of the Hamiltonian RNN architecture.

Hyperparameter Optimization
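A toy sketch of the idea (names and coefficients are illustrative, not the paper's architecture): updating the hidden state with a symplectic integrator of a Hamiltonian system keeps the state norm, and hence the backpropagated gradients, from exploding or vanishing over long sequences.

```python
import numpy as np

# Illustrative Hamiltonian-style recurrent update: a symplectic Euler step of
# H(q, p) = (q^2 + p^2) / 2, with the input u driving the momentum through a
# bounded nonlinearity.
def hamiltonian_rnn_step(q, p, u, eps=0.1, w=0.5):
    p = p - eps * (q + np.tanh(w * u))   # momentum update, input-driven
    q = q + eps * p                      # position update
    return q, p

# With zero input the step nearly conserves H, so the hidden-state norm stays
# in a narrow band over very long rollouts: no blow-up and no decay to zero.
q, p = 1.0, 0.0
free_norms = []
for _ in range(2000):
    q, p = hamiltonian_rnn_step(q, p, 0.0)
    free_norms.append(np.hypot(q, p))
print(min(free_norms), max(free_norms))
```

In the paper's framing, the step size `eps` plays the role of the hyperparameter through which the hidden-state gradient magnitude is controlled.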

Uncertainty quantification in graph-based classification of high dimensional data

no code implementations · 26 Mar 2017 · Andrea L. Bertozzi, Xiyang Luo, Andrew M. Stuart, Konstantinos C. Zygalakis

In this paper we introduce, develop algorithms for, and investigate the properties of, a variety of Bayesian models for the task of binary classification; via the posterior distribution on the classification labels, these methods automatically give measures of uncertainty.

Binary Classification · Classification +3
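A toy sketch of the Gaussian variant of this idea on a small chain graph (all parameters illustrative): the graph Laplacian acts as the prior precision on a latent label field, a few observed labels enter through a Gaussian likelihood, and the posterior variance quantifies uncertainty at the unlabelled nodes.

```python
import numpy as np

# Toy Gaussian label model on a 10-node chain graph.
n, tau2, gamma2 = 10, 0.01, 0.1
L = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0                 # chain-graph Laplacian
obs = {0: 1.0, n - 1: -1.0}               # binary labels at the two endpoints

prec = L + tau2 * np.eye(n)               # prior precision from the Laplacian
rhs = np.zeros(n)
for i, lab in obs.items():
    prec[i, i] += 1.0 / gamma2            # Gaussian likelihood, noise var gamma2
    rhs[i] = lab / gamma2

mean = np.linalg.solve(prec, rhs)         # posterior mean: soft label field
var = np.diag(np.linalg.inv(prec))        # posterior variance: uncertainty
print(np.sign(mean))                      # thresholding recovers the two classes
print(var[0] < var[n // 2])               # least certain far from the labels
```

The paper's probit and Ginzburg-Landau models replace this Gaussian likelihood with non-Gaussian ones and require MCMC rather than a linear solve, but the uncertainty-via-posterior principle is the same.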

(Non-) asymptotic properties of Stochastic Gradient Langevin Dynamics

no code implementations · 2 Jan 2015 · Sebastian J. Vollmer, Konstantinos C. Zygalakis, Yee Whye Teh

For this toy model we study the gain of the SGLD over the standard Euler method in the limit of large data sets.
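A minimal SGLD sketch on the same kind of toy Gaussian model (data, step size, and batch size here are illustrative): minibatch gradients keep the per-iteration cost low, while the finite step size biases the invariant distribution away from the exact posterior.

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.standard_normal(1000) + 1.0     # y_i ~ N(theta, 1), true theta = 1
N, batch = len(data), 50                   # minibatches of 5% of the data

def sgld_step(theta, step):
    """One SGLD update: half-step along an unbiased minibatch estimate of the
    log-posterior gradient, plus injected Gaussian noise of variance step."""
    idx = rng.integers(0, N, batch)
    grad = (N / batch) * np.sum(data[idx] - theta)
    return theta + 0.5 * step * grad + np.sqrt(step) * rng.standard_normal()

theta, step = 0.0, 1e-3
trace = []
for k in range(20000):
    theta = sgld_step(theta, step)
    trace.append(theta)
trace = np.array(trace[2000:])
# Under a flat prior the exact posterior is N(mean(data), 1/N); at this finite
# step the SGLD mean is on target while the variance is inflated by the
# minibatch noise -- the kind of bias the paper quantifies.
print(trace.mean(), data.mean())
```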
