no code implementations • 22 Nov 2023 • Stefano Bruno, Ying Zhang, Dong-Young Lim, Ömer Deniz Akyildiz, Sotirios Sabanis
As a result, we obtain the best known upper bounds, in terms of key quantities of interest such as dimension dependence and rate of convergence, on the Wasserstein-2 distance between the data distribution (Gaussian with unknown mean) and the output of our sampling algorithm.
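For reference, when both laws are Gaussian the Wasserstein-2 distance admits a closed form; this standard identity (not specific to the paper) shows which quantities such bounds must control:

$$
W_2^2\big(\mathcal{N}(m_1,\Sigma_1),\,\mathcal{N}(m_2,\Sigma_2)\big)
= \|m_1 - m_2\|^2
+ \operatorname{tr}\!\Big(\Sigma_1 + \Sigma_2 - 2\big(\Sigma_2^{1/2}\Sigma_1\Sigma_2^{1/2}\big)^{1/2}\Big).
$$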
no code implementations • 18 Jul 2023 • Carlos A. C. C. Perello, Ömer Deniz Akyildiz
We introduce a new class of adaptive importance samplers, termed AdaOAIS, which leverage adaptive optimisation tools.
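A minimal sketch of the flavour of such a sampler, assuming a Gaussian proposal whose mean is adapted with Adam on a Monte Carlo estimate of the importance-weight second moment (the target, objective, and step sizes here are illustrative, not the paper's exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, steps = 2, 500, 200
target_mean = np.array([1.5, -1.0])  # toy target: N(target_mean, I)

def log_p(x):                        # unnormalised target log-density
    return -0.5 * np.sum((x - target_mean) ** 2, axis=-1)

def log_q(x, theta):                 # Gaussian proposal N(theta, I)
    return -0.5 * np.sum((x - theta) ** 2, axis=-1)

theta = np.zeros(d)
m, v, b1, b2, lr, eps = np.zeros(d), np.zeros(d), 0.9, 0.999, 0.1, 1e-8
for t in range(1, steps + 1):
    x = theta + rng.standard_normal((N, d))        # x ~ q_theta
    w = np.exp(log_p(x) - log_q(x, theta))         # importance weights
    # score-function gradient of E_q[w^2], the chi^2-type objective
    grad = -np.mean((w ** 2)[:, None] * (x - theta), axis=0)
    m = b1 * m + (1 - b1) * grad                   # Adam moment updates
    v = b2 * v + (1 - b2) * grad ** 2
    theta -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)

print("adapted proposal mean:", theta)             # ≈ target_mean
```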
no code implementations • 26 Jan 2023 • Arnaud Vadeboncoeur, Ieva Kazlauskaite, Yanni Papandreou, Fehmi Cirak, Mark Girolami, Ömer Deniz Akyildiz
We introduce a new class of spatially stochastic, physics- and data-informed deep latent models for parametric partial differential equations (PDEs) which operate through scalable variational neural processes.
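A highly simplified sketch of the general shape of such a model, assuming a decoder that maps a latent sample together with spatial coordinates and a PDE parameter to field values; all architecture choices below are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class LatentPDEModel(nn.Module):
    """Illustrative deep latent model: u(x; mu) = decoder(z, x, mu), z ~ N(0, I)."""
    def __init__(self, latent_dim=8, coord_dim=1, param_dim=1, hidden=64):
        super().__init__()
        self.latent_dim = latent_dim
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + coord_dim + param_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),                   # predicted field value at x
        )

    def forward(self, x, mu):
        # one latent sample shared across all spatial coordinates
        z = torch.randn(1, self.latent_dim).expand(x.shape[0], -1)
        h = torch.cat([z, x, mu.expand(x.shape[0], -1)], dim=-1)
        return self.decoder(h)

model = LatentPDEModel()
x = torch.linspace(0.0, 1.0, 50).unsqueeze(-1)      # spatial grid
mu = torch.tensor([[0.3]])                          # PDE parameter
u_sample = model(x, mu)                             # one stochastic field realisation
```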
no code implementations • 9 Aug 2022 • Arnaud Vadeboncoeur, Ömer Deniz Akyildiz, Ieva Kazlauskaite, Mark Girolami, Fehmi Cirak
In the posited probabilistic model, both the forward and inverse maps are approximated as Gaussian distributions with a mean and covariance parameterized by deep neural networks.
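As a rough sketch of that parameterisation (names and sizes are illustrative), each map can be a network emitting a mean and a log-variance:

```python
import torch
import torch.nn as nn

class GaussianMap(nn.Module):
    """A map approximated as a Gaussian: returns a mean and a diagonal covariance."""
    def __init__(self, in_dim, out_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * out_dim))

    def forward(self, x):
        mean, log_var = self.net(x).chunk(2, dim=-1)
        return mean, log_var.exp()                # (mean, diagonal covariance)

forward_map = GaussianMap(in_dim=4, out_dim=10)   # parameters -> solution field
inverse_map = GaussianMap(in_dim=10, out_dim=4)   # observations -> parameters
mu, var = forward_map(torch.randn(8, 4))          # batch of 8 parameter vectors
```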
no code implementations • 2 Jan 2022 • Ömer Deniz Akyildiz
We analyze the optimized adaptive importance sampler (OAIS) for performing Monte Carlo integration with general proposals.
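For reference, the vanilla self-normalised importance sampling estimator whose proposal OAIS-type methods optimise looks like this (a generic sketch, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(1)

def snis_estimate(f, log_p, sample_q, log_q, n=10_000):
    """Self-normalised importance sampling estimate of E_p[f(X)]."""
    x = sample_q(n)
    logw = log_p(x) - log_q(x)
    w = np.exp(logw - logw.max())      # numerically stabilised weights
    w /= w.sum()
    return np.sum(w * f(x))

# toy example: target N(2, 1) (unnormalised), proposal N(0, 3^2)
est = snis_estimate(
    f=lambda x: x,
    log_p=lambda x: -0.5 * (x - 2.0) ** 2,
    sample_q=lambda n: 3.0 * rng.standard_normal(n),
    log_q=lambda x: -0.5 * (x / 3.0) ** 2,
)
print(est)   # ≈ 2.0
```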
1 code implementation • 21 Oct 2021 • Ömer Deniz Akyildiz, Connor Duffin, Sotirios Sabanis, Mark Girolami
By embedding uncertainty within the governing equations, finite element solutions are updated to give a posterior distribution that quantifies all sources of uncertainty associated with the model.
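A minimal linear-Gaussian caricature of that update, assuming a Gaussian prior over finite element coefficients u ~ N(m, C) conditioned on noisy observations y = Hu + e (the notation and toy sizes are illustrative):

```python
import numpy as np

def statfem_update(m, C, H, y, sigma2):
    """Condition a Gaussian FE solution u ~ N(m, C) on data y = H u + N(0, sigma2 I)."""
    S = H @ C @ H.T + sigma2 * np.eye(len(y))   # innovation covariance
    K = np.linalg.solve(S, H @ C).T             # Kalman-type gain C H^T S^{-1}
    return m + K @ (y - H @ m), C - K @ H @ C   # posterior mean and covariance

# toy: 5-node solution prior, observed at two nodes
m, C = np.zeros(5), np.eye(5)
H = np.eye(5)[[1, 3]]                           # observe nodes 1 and 3
y = np.array([0.8, -0.2])
m_post, C_post = statfem_update(m, C, H, y, sigma2=0.01)
```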
1 code implementation • NeurIPS 2020 • Lorenz Richter, Ayman Boustati, Nikolas Nüsken, Francisco J. R. Ruiz, Ömer Deniz Akyildiz
We analyse the properties of an unbiased gradient estimator of the ELBO for variational inference, based on the score function method with leave-one-out control variates.
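The generic score-function (REINFORCE) estimator with a leave-one-out baseline looks roughly like this; the integrand and step sizes are illustrative, and the paper's specific estimator is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)

def loo_score_gradient(theta, n=64):
    """Score-function gradient of E_{q_theta}[f(z)], q_theta = N(theta, 1),
    with a leave-one-out control variate (baseline)."""
    z = theta + rng.standard_normal(n)
    f = (z - 3.0) ** 2                     # toy integrand
    score = z - theta                      # d/dtheta log q_theta(z)
    baseline = (f.sum() - f) / (n - 1)     # leave-one-out mean, independent of z_i
    return np.mean(score * (f - baseline))

theta = 0.0
for _ in range(500):
    theta -= 0.05 * loo_score_gradient(theta)
print(theta)   # ≈ 3.0, the minimiser of E[(z - 3)^2]
```

Because each baseline term excludes its own sample, it is independent of that sample's score, so the control variate leaves the estimator unbiased while reducing its variance.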
no code implementations • 23 Feb 2020 • Ayman Boustati, Ömer Deniz Akyildiz, Theodoros Damoulas, Adam M. Johansen
We introduce a framework for inference in general state-space hidden Markov models (HMMs) under likelihood misspecification.
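A minimal bootstrap particle filter where the usual log-likelihood weight is swapped for a pluggable loss, shown as a generic robustification hook; the paper's specific choice of loss is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

def particle_filter(ys, n=1000, loss=None):
    """Bootstrap PF for x_t = 0.9 x_{t-1} + N(0, 1), y_t = x_t + N(0, 0.5^2).
    `loss(y, x)` replaces the log-likelihood to handle misspecification."""
    if loss is None:
        loss = lambda y, x: -0.5 * (y - x) ** 2 / 0.25   # standard log-likelihood
    x = rng.standard_normal(n)
    means = []
    for y in ys:
        x = 0.9 * x + rng.standard_normal(n)             # propagate particles
        logw = loss(y, x)
        w = np.exp(logw - logw.max()); w /= w.sum()      # normalised weights
        means.append(np.sum(w * x))
        x = rng.choice(x, size=n, p=w)                   # multinomial resampling
    return np.array(means)

print(particle_filter(rng.standard_normal(20))[:3])
```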
no code implementations • 13 Feb 2020 • Ömer Deniz Akyildiz, Sotirios Sabanis
We provide a nonasymptotic analysis of the convergence of stochastic gradient Hamiltonian Monte Carlo (SGHMC) to a target measure in Wasserstein-2 distance without assuming log-concavity.
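For orientation, one common discretisation of the underlying SDE looks as follows; this is a generic sketch, and the stepsizes and exact scheme analysed in the paper may differ:

```python
import numpy as np

rng = np.random.default_rng(4)

def sghmc(grad_U, theta0, eta=1e-3, gamma=1.0, n_iter=10_000):
    """One common SGHMC discretisation of
    dtheta = v dt,  dv = -grad_U(theta) dt - gamma v dt + sqrt(2 gamma) dW."""
    theta, v = theta0.copy(), np.zeros_like(theta0)
    samples = []
    for _ in range(n_iter):
        grad = grad_U(theta)                   # stochastic gradient of the potential
        v += -eta * grad - eta * gamma * v \
             + np.sqrt(2 * gamma * eta) * rng.standard_normal(theta.shape)
        theta += eta * v
        samples.append(theta.copy())
    return np.array(samples)

# toy target: N(0, I), i.e. U(theta) = ||theta||^2 / 2, with noisy gradients
noisy_grad = lambda th: th + 0.1 * rng.standard_normal(th.shape)
chain = sghmc(noisy_grad, np.zeros(2))
print(chain[5000:].std(axis=0))   # ≈ 1 in each coordinate
```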
1 code implementation • 9 Oct 2019 • Ömer Deniz Akyildiz, Gerrit J. J. van den Burg, Theodoros Damoulas, Mark F. J. Steel
In particular, we consider nonlinear Gaussian state-space models where sequential approximate inference results in the factorization of a data matrix into a dictionary and time-varying coefficients with potentially nonlinear Markovian dependencies.
Tasks: Multivariate Time Series Forecasting, Multivariate Time Series Imputation, +1
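A toy sketch of the sequential flavour: a fixed dictionary with coefficients tracked by a Kalman filter as new columns of the data matrix stream in. This is a caricature, not the paper's algorithm, which also learns the dictionary:

```python
import numpy as np

rng = np.random.default_rng(5)
d, r = 10, 3                          # data dimension, rank
C = rng.standard_normal((d, r))       # dictionary (held fixed here for simplicity)

def track_coefficients(Y, q=0.1, s2=0.05):
    """Kalman filter over coefficients: x_t = x_{t-1} + N(0, q I),
    y_t = C x_t + N(0, s2 I); returns the filtered coefficient means."""
    m, P = np.zeros(r), np.eye(r)
    out = []
    for y in Y.T:                     # stream columns of the data matrix
        P = P + q * np.eye(r)         # predict
        S = C @ P @ C.T + s2 * np.eye(d)
        K = np.linalg.solve(S, C @ P).T
        m = m + K @ (y - C @ m)       # update
        P = P - K @ C @ P
        out.append(m.copy())
    return np.array(out).T            # r x T coefficient estimates

Y = C @ rng.standard_normal((r, 50)) + 0.1 * rng.standard_normal((d, 50))
X_hat = track_coefficients(Y)
```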
no code implementations • 4 Oct 2019 • Ying Zhang, Ömer Deniz Akyildiz, Theodoros Damoulas, Sotirios Sabanis
In this paper, we are concerned with a non-asymptotic analysis of sampling algorithms used in nonconvex optimization.
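The canonical algorithm in this family is stochastic gradient Langevin dynamics (SGLD); here is a minimal sketch on a nonconvex toy potential (parameters illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

def sgld(grad_U, theta0, eta=1e-2, beta=8.0, n_iter=20_000):
    """SGLD: theta <- theta - eta * grad_U(theta) + sqrt(2 eta / beta) * noise.
    For large inverse temperature beta the chain concentrates near minimisers of U."""
    theta = theta0
    for _ in range(n_iter):
        theta = theta - eta * grad_U(theta) \
                + np.sqrt(2 * eta / beta) * rng.standard_normal()
    return theta

# nonconvex toy potential U(t) = (t^2 - 1)^2 with minima at t = +/- 1
grad_U = lambda t: 4 * t * (t ** 2 - 1) + 0.1 * rng.standard_normal()
print(sgld(grad_U, theta0=3.0))   # ends near +/- 1
```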
no code implementations • 28 Mar 2019 • Ömer Deniz Akyildiz, Joaquín Míguez
The non-asymptotic bounds derived in this paper imply that, when the target belongs to the exponential family, the $L_2$ errors of the optimised samplers converge at the optimal rate of $\mathcal{O}(1/\sqrt{N})$, and the rate of convergence in the number of iterations is provided explicitly.
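Bounds of this type are typically of the schematic form (constants suppressed; this is the generic shape, not the paper's exact statement):

$$
\mathbb{E}\big[(I_N^{\theta}(f) - I(f))^2\big]^{1/2}
\;\le\; \frac{c_f\,\sqrt{\rho(\theta)}}{\sqrt{N}},
\qquad
\rho(\theta) = \mathbb{E}_{q_\theta}\!\left[\frac{\pi^2(X)}{q_\theta^2(X)}\right],
$$

where $N$ is the number of samples and $q_\theta$ the proposal; the iteration dependence enters through how fast the optimiser drives $\rho(\theta_t)$ toward its minimum.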
no code implementations • 4 Dec 2018 • Ömer Deniz Akyildiz, Émilie Chouzenoux, Víctor Elvira, Joaquín Míguez
In this paper, we propose the probabilistic incremental proximal gradient (PIPG) method, developed from a probabilistic interpretation of the incremental proximal gradient algorithm.
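For context, the deterministic algorithm being reinterpreted cycles through component costs, alternating a gradient step on one component with a proximal step on the regulariser; a sketch with an $\ell_2$ regulariser whose prox is closed form (step sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def ipg(A, y, lam=0.01, step=0.05, epochs=50):
    """Incremental proximal gradient for sum_i (y_i - a_i^T x)^2 / 2 + (lam/2)||x||^2:
    a gradient step on one component, then the closed-form prox of the regulariser."""
    x = np.zeros(A.shape[1])
    for _ in range(epochs):
        for a, yi in zip(A, y):
            x = x - step * (a @ x - yi) * a    # gradient step on one component
            x = x / (1.0 + step * lam)         # prox of (lam/2)||.||^2
    return x

A = rng.standard_normal((100, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = A @ x_true + 0.01 * rng.standard_normal(100)
print(ipg(A, y))   # ≈ x_true, shrunk slightly by the regulariser
```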
no code implementations • 23 Nov 2018 • Ömer Deniz Akyildiz, Dan Crisan, Joaquín Míguez
We introduce and analyze a parallel sequential Monte Carlo methodology for the numerical solution of optimization problems that involve the minimization of a cost function that consists of the sum of many individual components.
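An illustrative (and deliberately crude) sample-and-resample loop in this spirit: jitter a particle cloud, weight by one cost component at a time, resample. This is only meant to convey the mechanism, not the paper's method or its parallel structure:

```python
import numpy as np

rng = np.random.default_rng(8)

def smc_minimise(costs, d=2, n=500, steps=100, jitter=0.1, beta=5.0):
    """SMC-flavoured minimisation of sum_i costs[i](x): jitter the particle cloud,
    weight by exp(-beta * one component's cost), then resample."""
    x = rng.standard_normal((n, d)) * 3.0
    for k in range(steps):
        x = x + jitter * rng.standard_normal((n, d))      # mutation / jittering
        c = costs[k % len(costs)](x)                      # one cost component
        w = np.exp(-beta * (c - c.min())); w /= w.sum()   # potential-style weights
        x = x[rng.choice(n, size=n, p=w)]                 # resampling
    return x.mean(axis=0)

# toy: sum of two quadratics with overall minimiser at (1, -1)
costs = [lambda x: np.sum((x - np.array([2.0, -1.0])) ** 2, axis=1),
         lambda x: np.sum((x - np.array([0.0, -1.0])) ** 2, axis=1)]
print(smc_minimise(costs))   # ≈ (1, -1)
```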
no code implementations • 12 Jul 2018 • Ömer Deniz Akyildiz, Víctor Elvira, Joaquín Míguez
We then carry this observation over to a general sequential setting: We consider the incremental proximal method, which is an algorithm for large-scale optimization, and show that, for a linear-quadratic cost function, it can naturally be realized by the Kalman filter.
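The observation can be checked numerically: processing the rows of a least-squares problem one at a time with Kalman-filter measurement updates reproduces the batch solution (a generic demonstration under a diffuse Gaussian prior, not the paper's derivation):

```python
import numpy as np

rng = np.random.default_rng(9)
n, d = 50, 3
A = rng.standard_normal((n, d))
y = A @ np.array([1.0, -1.0, 2.0]) + 0.05 * rng.standard_normal(n)

# Kalman filter with a static state: process rows (a_i, y_i) one at a time.
m, P = np.zeros(d), 100.0 * np.eye(d)   # diffuse Gaussian prior on x
for a, yi in zip(A, y):
    s = a @ P @ a + 0.05 ** 2           # innovation variance
    k = P @ a / s                       # Kalman gain
    m = m + k * (yi - a @ m)            # proximal-type incremental update
    P = P - np.outer(k, a @ P)

print(m)                                # matches (up to the diffuse prior) ...
print(np.linalg.lstsq(A, y, rcond=None)[0])   # ... the batch solution
```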
no code implementations • 7 Sep 2015 • Ömer Deniz Akyildiz
Using the probabilistic model, we derive a matrix factorisation algorithm as a recursive linear filter.
no code implementations • 14 Jun 2015 • Ömer Deniz Akyildiz
In this paper, we propose an online algorithm to compute matrix factorizations.
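A generic online matrix factorisation loop in the spirit of the two preceding entries, alternating a least-squares coefficient solve with a stochastic gradient step on the dictionary per incoming column; this is illustrative, not the filter-based algorithms the papers derive:

```python
import numpy as np

rng = np.random.default_rng(10)
d, r, T = 20, 4, 2000
C_true = rng.standard_normal((d, r))

C = rng.standard_normal((d, r))                # dictionary estimate
for t in range(T):
    # new data column from the true low-rank model plus noise
    y = C_true @ rng.standard_normal(r) + 0.05 * rng.standard_normal(d)
    x = np.linalg.lstsq(C, y, rcond=None)[0]   # coefficients for this column
    C += 0.01 * np.outer(y - C @ x, x)         # SGD step on reconstruction error

print(np.linalg.norm(y - C @ x))               # small residual: the subspace is learnt
```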