no code implementations • 5 Sep 2023 • Stefano Di Giovacchino, Desmond J. Higham, Konstantinos Zygalakis
Stochastic optimization methods have been hugely successful in making large-scale optimization problems feasible when computing the full gradient is computationally prohibitive.
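The idea behind such methods can be illustrated with a minimal sketch: instead of the full gradient over all n data points, each step uses an unbiased single-sample estimate. The least-squares setup, data sizes, and the per-sample (randomized-Kaczmarz-style) step size below are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Illustrative problem: consistent least squares, f(x) = (1/2n) * ||A x - b||^2.
# A full-gradient step costs O(n d); here each step touches one random row.
rng = np.random.default_rng(0)
n, d = 1000, 5
A = rng.normal(size=(n, d))
x_true = rng.normal(size=d)
b = A @ x_true                              # noiseless targets (assumption)

x = np.zeros(d)
for _ in range(5000):
    i = rng.integers(n)                     # sample one data point
    resid = A[i] @ x - b[i]
    # single-sample gradient direction, scaled by 1/||A_i||^2 (Kaczmarz step)
    x -= (resid / (A[i] @ A[i])) * A[i]
```

For this noiseless, consistent system the iteration converges linearly to `x_true` while never forming the full gradient.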
no code implementations • 15 May 2023 • Paul Dobson, Jesus Maria Sanz-Serna, Konstantinos Zygalakis
As a result, we are able to prove convergence rates for Polyak's ordinary differential equation and for a two-parameter family of Nesterov algorithms that improve on those available in the literature.
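For context, a minimal sketch of the textbook Nesterov scheme on a strongly convex quadratic; the two-parameter family analysed in the paper generalizes such iterations, but the constants and test problem below are illustrative assumptions, not the authors' construction.

```python
import numpy as np

# Quadratic f(x) = 0.5 x^T Q x with L = 100, mu = 1 (condition number 100).
Q = np.diag([1.0, 10.0, 100.0])
L, mu = 100.0, 1.0
kappa = L / mu
beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)   # constant momentum coefficient

x = np.array([1.0, 1.0, 1.0])
y = x.copy()
for _ in range(500):
    x_next = y - (1.0 / L) * (Q @ y)        # gradient step at the look-ahead point
    y = x_next + beta * (x_next - x)        # momentum extrapolation
    x = x_next
```

The iterates contract toward the minimizer at roughly the accelerated rate (1 - 1/sqrt(kappa)) per step, versus (1 - 1/kappa) for plain gradient descent.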
no code implementations • 9 Feb 2023 • Tapio Helin, Andrew Stuart, Aretha Teckentrup, Konstantinos Zygalakis
Bayesian posterior distributions arising in modern applications, including inverse problems for partial differential equation models in tomography and subsurface flow, are often computationally intractable due to the large cost of evaluating the data likelihood.
1 code implementation • 10 Sep 2022 • Enrico Crovini, Simon L. Cotter, Konstantinos Zygalakis, Andrew B. Duncan
In this work we reformulate batch BO as an optimisation problem over the space of probability measures.
no code implementations • 27 Aug 2020 • Armin Eftekhari, Konstantinos Zygalakis
In matrix sensing, we first numerically identify the sensitivity to the initialization rank as a new limitation of the implicit bias of gradient flow.
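The model problem behind this finding can be sketched as gradient descent on a factorized objective with a small, full-rank initialization; the dimensions, step size, and full-observation simplification below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Overparameterized symmetric factorization: minimize ||U U^T - M||_F^2
# where M is low-rank but U is a full d x d factor with small random init.
rng = np.random.default_rng(2)
d, r = 10, 2
G = rng.normal(size=(d, r))
M = G @ G.T                           # rank-2 PSD ground truth

U = 1e-3 * rng.normal(size=(d, d))    # full-rank, small-scale initialization
eta = 0.01
for _ in range(3000):
    R = U @ U.T - M
    U -= eta * (2.0 * R @ U)          # gradient of ||U U^T - M||_F^2 w.r.t. U
```

With a sufficiently small initialization, the trajectory recovers the low-rank matrix; the sensitivity of this behaviour to the rank (and scale) of the initialization is exactly the kind of effect the paper probes numerically.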
Information Theory • Optimization and Control
no code implementations • NeurIPS 2016 • Onur Teymur, Konstantinos Zygalakis, Ben Calderhead
We present a derivation and theoretical investigation of the Adams-Bashforth and Adams-Moulton families of linear multistep methods for solving ordinary differential equations, starting from a Gaussian process (GP) framework.
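The deterministic scheme that such a GP construction recovers in the mean is the classical method; a minimal sketch of two-step Adams-Bashforth (AB2) on an assumed test problem y' = -y, y(0) = 1:

```python
import numpy as np

def f(t, y):
    return -y                                # test ODE: exact solution exp(-t)

h, T = 0.01, 1.0
n = int(T / h)
y = np.empty(n + 1)
y[0] = 1.0
y[1] = y[0] + h * f(0.0, y[0])               # bootstrap the first step with Euler
for k in range(1, n):
    # AB2: y_{k+1} = y_k + h * (3/2 * f_k - 1/2 * f_{k-1})
    y[k + 1] = y[k] + h * (1.5 * f(k * h, y[k]) - 0.5 * f((k - 1) * h, y[k - 1]))
```

AB2 is second-order accurate, so with h = 0.01 the endpoint error against exp(-1) is on the order of 1e-4; the probabilistic version in the paper additionally quantifies this discretization error.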
no code implementations • 15 Sep 2016 • Mike Giles, Tigran Nagapetyan, Lukasz Szpruch, Sebastian Vollmer, Konstantinos Zygalakis
In contrast to standard MCMC methods, stochastic gradient MCMC (SGMCMC) algorithms such as Stochastic Gradient Langevin Dynamics (SGLD) require access only to a batch of the data set at every step.
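A minimal SGLD sketch makes the point concrete: each update combines a subsampled, reweighted gradient of the log-posterior with injected Gaussian noise. The Gaussian-mean target, flat prior, and all tuning constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000
data = rng.normal(loc=2.0, scale=1.0, size=N)   # synthetic observations

eps, batch = 1e-3, 32                           # step size and minibatch size
theta, samples = 0.0, []
for k in range(5000):
    idx = rng.integers(0, N, size=batch)
    # unbiased stochastic gradient of the log-posterior (flat prior, unit variance)
    grad = (N / batch) * np.sum(data[idx] - theta)
    # Langevin update: half-step drift plus sqrt(eps)-scaled Gaussian noise
    theta += 0.5 * eps * grad + np.sqrt(eps) * rng.normal()
    if k > 1000:                                # discard burn-in
        samples.append(theta)
```

Only `batch` of the N data points are touched per step, yet the samples concentrate around the posterior mean (here, the sample mean of the data), up to discretization and minibatching bias — the bias/cost trade-off these multilevel papers analyse.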
no code implementations • 4 May 2016 • Michael B. Giles, Mateusz B. Majka, Lukasz Szpruch, Sebastian Vollmer, Konstantinos Zygalakis
We show that this is the first stochastic gradient MCMC method with complexity $\mathcal{O}(\varepsilon^{-2}|\log {\varepsilon}|^{3})$, in contrast to the complexity $\mathcal{O}(\varepsilon^{-3})$ of currently available methods.