Search Results for author: Konstantinos Zygalakis

Found 8 papers, 1 papers with code

Backward error analysis and the qualitative behaviour of stochastic optimization algorithms: Application to stochastic coordinate descent

no code implementations • 5 Sep 2023 • Stefano Di Giovacchino, Desmond J. Higham, Konstantinos Zygalakis

Stochastic optimization methods have been hugely successful in making large-scale optimization problems feasible when computing the full gradient is computationally prohibitive.
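To fix ideas, a minimal randomized coordinate descent sketch is shown below: each step computes only one partial derivative rather than the full gradient. The quadratic objective, matrix `A`, vector `b`, and step size are illustrative choices, not taken from the paper.

```python
import numpy as np

# Randomized coordinate descent on f(x) = 0.5 * x^T A x - b^T x.
# At each step, pick one coordinate and update it using its partial
# derivative alone; the full gradient is never formed.
rng = np.random.default_rng(0)
A = np.array([[3.0, 1.0], [1.0, 2.0]])   # symmetric positive definite
b = np.array([1.0, 1.0])
x = np.zeros(2)
step = 0.25                              # < 1 / max_i A[i, i], so stable

for _ in range(500):
    i = rng.integers(len(x))             # coordinate chosen at random
    grad_i = A[i] @ x - b[i]             # partial derivative df/dx_i
    x[i] -= step * grad_i                # update only coordinate i

x_star = np.linalg.solve(A, b)           # exact minimiser, for comparison
```

After enough iterations `x` tracks the exact minimiser `x_star`, while each iteration touches a single row of `A`.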

Stochastic Optimization

On the connections between optimization algorithms, Lyapunov functions, and differential equations: theory and insights

no code implementations • 15 May 2023 • Paul Dobson, Jesus Maria Sanz-Serna, Konstantinos Zygalakis

As a result, we are able to prove rates of convergence for Polyak's ordinary differential equation and for a two-parameter family of Nesterov algorithms that improve on those available in the literature.

Introduction To Gaussian Process Regression In Bayesian Inverse Problems, With New Results On Experimental Design For Weighted Error Measures

no code implementations • 9 Feb 2023 • Tapio Helin, Andrew Stuart, Aretha Teckentrup, Konstantinos Zygalakis

Bayesian posterior distributions arising in modern applications, including inverse problems in partial differential equation models in tomography and subsurface flow, are often computationally intractable due to the large computational cost of evaluating the data likelihood.

Experimental Design • Regression

Batch Bayesian Optimization via Particle Gradient Flows

1 code implementation • 10 Sep 2022 • Enrico Crovini, Simon L. Cotter, Konstantinos Zygalakis, Andrew B. Duncan

In this work we reformulate batch BO as an optimisation problem over the space of probability measures.

Bayesian Inference

Limitations of Implicit Bias in Matrix Sensing: Initialization Rank Matters

no code implementations • 27 Aug 2020 • Armin Eftekhari, Konstantinos Zygalakis

In matrix sensing, we first numerically identify the sensitivity to the initialization rank as a new limitation of the implicit bias of gradient flow.

Information Theory • Optimization and Control

Probabilistic Linear Multistep Methods

no code implementations • NeurIPS 2016 • Onur Teymur, Konstantinos Zygalakis, Ben Calderhead

We present a derivation and theoretical investigation of the Adams-Bashforth and Adams-Moulton family of linear multistep methods for solving ordinary differential equations, starting from a Gaussian process (GP) framework.
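For reference, the classical (non-probabilistic) two-step Adams-Bashforth method that this family generalises is sketched below; the paper's contribution is a probabilistic version derived from a GP framework, which this sketch does not implement. The test problem and step size are illustrative.

```python
import numpy as np

def ab2(f, t0, x0, h, n_steps):
    """Two-step Adams-Bashforth for x' = f(t, x), returning the trajectory.

    Uses the update x_{n+2} = x_{n+1} + h * (3/2 f_{n+1} - 1/2 f_n),
    with one Euler step to bootstrap the second starting value.
    """
    f_prev = f(t0, x0)
    x = x0 + h * f_prev                  # Euler bootstrap step
    t = t0 + h
    xs = [x0, x]
    for _ in range(n_steps - 1):
        f_curr = f(t, x)
        x = x + h * (1.5 * f_curr - 0.5 * f_prev)
        t += h
        f_prev = f_curr
        xs.append(x)
    return np.array(xs)

# Example: x' = -x, x(0) = 1, integrated to t = 1 (exact solution exp(-t))
xs = ab2(lambda t, x: -x, 0.0, 1.0, 0.01, 100)
```

Because the method is second order, the endpoint `xs[-1]` agrees with `exp(-1)` to roughly O(h^2).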

Multilevel Monte Carlo for Scalable Bayesian Computations

no code implementations • 15 Sep 2016 • Mike Giles, Tigran Nagapetyan, Lukasz Szpruch, Sebastian Vollmer, Konstantinos Zygalakis

In contrast to MCMC methods, Stochastic Gradient MCMC (SGMCMC) algorithms such as the Stochastic Gradient Langevin Dynamics (SGLD) only require access to a batch of the data set at every step.
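A minimal SGLD sketch, illustrating the point that each update needs only a minibatch: the step size, batch size, flat prior, and 1-D Gaussian likelihood below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# SGLD for the posterior mean of a 1-D Gaussian with known unit variance
# and a flat prior. Each step uses an unbiased minibatch gradient of the
# log posterior, plus injected Gaussian noise of matching scale.
rng = np.random.default_rng(1)
data = rng.normal(2.0, 1.0, size=1000)   # observations
N, batch = len(data), 50
eps = 1e-3                               # step size
theta = 0.0
samples = []

for _ in range(5000):
    idx = rng.integers(N, size=batch)    # minibatch, not the full data set
    grad = (N / batch) * np.sum(data[idx] - theta)   # rescaled to be unbiased
    theta += 0.5 * eps * grad + np.sqrt(eps) * rng.normal()
    samples.append(theta)

# after burn-in, the chain should fluctuate around the data mean,
# which equals the posterior mean under a flat prior
est = np.mean(samples[1000:])
```

The `(N / batch)` rescaling keeps the stochastic gradient unbiased for the full-data gradient, which is what makes subsampling admissible here.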

Multilevel Monte Carlo methods for the approximation of invariant measures of stochastic differential equations

no code implementations • 4 May 2016 • Michael B. Giles, Mateusz B. Majka, Lukasz Szpruch, Sebastian Vollmer, Konstantinos Zygalakis

We show that this is the first stochastic gradient MCMC method with complexity $\mathcal{O}(\varepsilon^{-2}|\log {\varepsilon}|^{3})$, in contrast to the complexity $\mathcal{O}(\varepsilon^{-3})$ of currently available methods.
