no code implementations • 15 Nov 2023 • Iosif Lytras, Sotirios Sabanis
In this article we propose a novel taming Langevin-based scheme called $\mathbf{sTULA}$ to sample from distributions that have a superlinearly growing log-gradient and also satisfy a log-Sobolev inequality.
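The paper defines sTULA's precise taming function; as background, here is a minimal sketch of the classic tamed unadjusted Langevin step, where dividing the gradient by $1 + \gamma\|\nabla\log\pi\|$ bounds the effective drift even when the log-gradient grows superlinearly. The function names and the 1-d toy target are illustrative, not the paper's construction.

```python
import math
import random

def tamed_ula_step(theta, grad_log_pi, step, rng):
    """One tamed unadjusted Langevin step (illustrative sketch).

    The raw gradient is divided by (1 + step * |grad|), so the drift
    term step * tamed is bounded even when grad_log_pi grows
    superlinearly; sTULA's exact taming function is in the paper.
    """
    g = grad_log_pi(theta)
    tamed = g / (1.0 + step * abs(g))
    noise = math.sqrt(2.0 * step) * rng.gauss(0.0, 1.0)
    return theta + step * tamed + noise

# Toy target pi(x) proportional to exp(-x^4 / 4): the log-gradient
# -x^3 grows superlinearly, so plain ULA can diverge without taming.
rng = random.Random(0)
theta = 1.0
samples = []
for i in range(20000):
    theta = tamed_ula_step(theta, lambda x: -x**3, 0.01, rng)
    if i >= 5000:
        samples.append(theta)
mean = sum(samples) / len(samples)
```

For this symmetric target the empirical mean of the chain should sit near zero, and the taming keeps every iterate finite despite the cubic gradient.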
no code implementations • 19 Jan 2023 • Tim Johnston, Iosif Lytras, Sotirios Sabanis
In this article we consider sampling from log-concave distributions in the Hamiltonian setting, without assuming that the objective gradient is globally Lipschitz.
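For context, the Hamiltonian (kinetic, underdamped) Langevin diffusion augments the position with a velocity variable. Below is a plain Euler discretization on a smooth toy potential; the paper's contribution is precisely handling potentials whose gradient is not globally Lipschitz, which this naive scheme does not address. All names and parameters here are illustrative.

```python
import math
import random

def kinetic_langevin_step(x, v, grad_U, step, friction, rng):
    """One Euler step of kinetic (underdamped) Langevin dynamics:
        dX_t = V_t dt
        dV_t = -friction * V_t dt - grad_U(X_t) dt + sqrt(2 * friction) dW_t
    Illustrative discretization only; it assumes a well-behaved grad_U.
    """
    noise = math.sqrt(2.0 * friction * step) * rng.gauss(0.0, 1.0)
    v_new = v - step * (friction * v + grad_U(x)) + noise
    x_new = x + step * v
    return x_new, v_new

# Toy run on U(x) = x^2 / 2, whose invariant X-marginal is N(0, 1).
rng = random.Random(0)
x, v = 3.0, 0.0
xs = []
for i in range(50000):
    x, v = kinetic_langevin_step(x, v, lambda y: y, 0.01, 1.0, rng)
    if i >= 10000:
        xs.append(x)
mean_x = sum(xs) / len(xs)
var_x = sum((s - mean_x) ** 2 for s in xs) / len(xs)
```

After burn-in, the position marginal should be approximately standard normal, so the empirical mean is near 0 and the variance near 1, up to discretization bias.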
no code implementations • 25 Jun 2020 • Attila Lovas, Iosif Lytras, Miklós Rásonyi, Sotirios Sabanis
We offer a new learning algorithm based on an appropriately constructed variant of the popular stochastic gradient Langevin dynamics (SGLD), called the tamed unadjusted stochastic Langevin algorithm (TUSLA).
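A minimal sketch of the taming idea applied to a stochastic gradient: the noisy gradient is divided by a factor growing polynomially in $\|\theta\|$, so iterates stay stable even when the objective's gradient grows polynomially. The 1-d setting, function names, and toy objective are illustrative assumptions, not the paper's exact construction.

```python
import math
import random

def tusla_style_step(theta, stoch_grad, datum, step, r, rng):
    """One tamed stochastic-gradient Langevin step (illustrative sketch).

    The stochastic gradient is divided by 1 + sqrt(step) * |theta|^(2r),
    which controls the drift when the objective gradient grows
    polynomially in theta.
    """
    g = stoch_grad(theta, datum)
    tamed = g / (1.0 + math.sqrt(step) * abs(theta) ** (2 * r))
    noise = math.sqrt(2.0 * step) * rng.gauss(0.0, 1.0)
    return theta - step * tamed + noise

# Toy objective u(theta) = (theta - 1)^4 / 4 + theta^2 / 2,
# observed through gradients corrupted by mean-zero data noise xi.
def noisy_grad(theta, xi):
    return (theta - 1.0) ** 3 + theta + xi

rng = random.Random(0)
theta = 5.0  # start far from the minimiser; taming keeps iterates stable
thetas = []
for i in range(30000):
    theta = tusla_style_step(theta, noisy_grad, rng.gauss(0.0, 0.5), 0.01, 1, rng)
    if i >= 10000:
        thetas.append(theta)
mean_theta = sum(thetas) / len(thetas)
```

Despite the cubic gradient growth and the noisy gradients, the tamed chain remains finite and settles near the objective's minimiser (roughly 0.3 for this toy objective).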