no code implementations • 22 Nov 2023 • Stefano Bruno, Ying Zhang, Dong-Young Lim, Ömer Deniz Akyildiz, Sotirios Sabanis
As a result, we obtain the best known upper bound estimates, in terms of key quantities of interest such as the dimension and rates of convergence, for the Wasserstein-2 distance between the data distribution (a Gaussian with unknown mean) and the distribution generated by our sampling algorithm.
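For context, the Wasserstein-2 distance between two Gaussians admits a closed form, which is the quantity such bounds control. A minimal sketch (the function name and the use of scipy.linalg.sqrtm are my own choices, not from the paper):

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, S1, m2, S2):
    """Wasserstein-2 distance between N(m1, S1) and N(m2, S2)."""
    # Closed form: W2^2 = ||m1 - m2||^2 + tr(S1 + S2 - 2 (S2^{1/2} S1 S2^{1/2})^{1/2})
    rS2 = sqrtm(S2)
    cross = sqrtm(rS2 @ S1 @ rS2).real  # discard negligible imaginary round-off
    return np.sqrt(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2.0 * cross))
```

In the unknown-mean setting above, where the two Gaussians share a covariance, this reduces to the Euclidean distance between the two means.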
1 code implementation • 24 Oct 2022 • Dong-Young Lim, Ariel Neufeld, Sotirios Sabanis, Ying Zhang
We introduce a new Langevin dynamics-based algorithm, called e-TH$\varepsilon$O POULA, to solve optimization problems with discontinuous stochastic gradients, which arise naturally in real-world applications such as quantile estimation, vector quantization, CVaR minimization, and regularized optimization problems involving ReLU neural networks.
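To make the problem class concrete, here is a hedged sketch of a plain Langevin (SGLD-style) update applied to quantile estimation, where the stochastic gradient of the pinball loss is discontinuous in the parameter; this is a generic illustration, not the e-TH$\varepsilon$O POULA update rule, and all parameter values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=100_000)

tau = 0.9     # target quantile level
lam = 1e-2    # step size
beta = 1e8    # inverse temperature; large beta puts us in the optimization regime
theta = 0.0

for x in data:
    grad = float(x <= theta) - tau      # discontinuous stochastic gradient of the pinball loss
    noise = np.sqrt(2.0 * lam / beta) * rng.standard_normal()
    theta = theta - lam * grad + noise  # Langevin (SGLD-style) step

print(theta, np.quantile(data, tau))    # the two values should roughly agree
```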
1 code implementation • 19 Jul 2021 • Dong-Young Lim, Ariel Neufeld, Sotirios Sabanis, Ying Zhang
To illustrate the applicability of the main results, we consider an example from transfer learning with ReLU neural networks, which represents a key paradigm in machine learning.
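As a rough sketch of that transfer-learning setup (architecture, sizes, and data below are illustrative assumptions, not the paper's construction): a pretrained ReLU feature extractor is frozen and only the final layer is re-trained on the target task.

```python
import torch
import torch.nn as nn

# Pretrained ReLU feature extractor (weights assumed learned on a source task).
feature_extractor = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                                  nn.Linear(64, 64), nn.ReLU())
head = nn.Linear(64, 1)  # only this layer is trained on the target task

for p in feature_extractor.parameters():
    p.requires_grad = False  # freeze the source-task features

opt = torch.optim.SGD(head.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

x, y = torch.randn(32, 16), torch.randn(32, 1)  # stand-in target-task batch
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(head(feature_extractor(x)), y)
    loss.backward()
    opt.step()
```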
no code implementations • 20 Jun 2021 • Dong-Young Lim
The proposed model captures nonlinear relationships in the explanatory variables by parameterizing the logarithmic mean functions of the frequency and severity distributions as neural networks.
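A minimal sketch of the frequency half of this idea (the severity part is analogous); the architecture and data below are stand-ins I am assuming, not taken from the paper:

```python
import torch
import torch.nn as nn

# Neural network for the log of the Poisson frequency mean, log lambda(x).
log_mean_net = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))

x = torch.randn(256, 8)                     # stand-in policyholder features
counts = torch.poisson(torch.ones(256, 1))  # stand-in claim counts

opt = torch.optim.Adam(log_mean_net.parameters(), lr=1e-3)
nll = nn.PoissonNLLLoss(log_input=True)     # loss consumes log lambda(x) directly

for _ in range(200):
    opt.zero_grad()
    loss = nll(log_mean_net(x), counts)
    loss.backward()
    opt.step()
```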
1 code implementation • 28 May 2021 • Dong-Young Lim, Sotirios Sabanis
We present a new class of Langevin-based algorithms, which overcomes many of the known shortcomings of popular adaptive optimizers currently used for fine-tuning deep learning models.
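For intuition about this family of methods (a generic sketch, not the specific algorithm proposed in the paper), a preconditioned Langevin step combines an RMSprop-style adaptive step size with injected Gaussian noise; every name and value below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def langevin_adaptive_step(theta, grad, v, lr=1e-3, beta2=0.999,
                           inv_temp=1e8, eps=1e-8):
    """One generic preconditioned-Langevin update (illustrative only)."""
    v = beta2 * v + (1.0 - beta2) * grad ** 2  # second-moment estimate, as in RMSprop
    precond = 1.0 / (np.sqrt(v) + eps)         # diagonal preconditioner
    noise = np.sqrt(2.0 * lr * precond / inv_temp) * rng.standard_normal(theta.shape)
    return theta - lr * precond * grad + noise, v
```

Note that the injected noise is scaled by the same preconditioner as the gradient, which is what distinguishes this family from plain noisy SGD.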