no code implementations • 7 Feb 2024 • Petr Ostroukhov, Aigerim Zhumabayeva, Chulu Xiang, Alexander Gasnikov, Martin Takáč, Dmitry Kamzolov
To substantiate the efficacy of our method, we show experimentally how introducing an adaptive step size and an adaptive batch size gradually improves the performance of regular SGD.
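As a rough illustration of the two ingredients, here is a minimal NumPy sketch combining a generic adaptive step size (an AdaGrad-style accumulator) with a generic adaptive batch size (a crude norm test); both rules are stand-ins for the idea, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic least-squares problem: min_x 0.5/n * ||A x - b||^2
n, d = 1000, 20
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

x = np.zeros(d)
batch = 8          # starting batch size, grown adaptively below
g2_sum = 1e-8      # AdaGrad-style accumulator for the adaptive step size
eta = 1.0          # base step size

for t in range(200):
    idx = rng.choice(n, size=min(batch, n), replace=False)
    resid = A[idx] @ x - b[idx]
    per_sample = A[idx] * resid[:, None]   # per-sample gradients
    g = per_sample.mean(axis=0)            # mini-batch gradient
    # Adaptive batch size: enlarge the batch when the sample variance of
    # the gradient estimate dominates its squared norm (a crude norm test).
    if per_sample.var(axis=0).sum() / len(idx) > g @ g:
        batch = min(2 * batch, n)
    # Adaptive step size: AdaGrad-style scaling by accumulated gradient norms.
    g2_sum += g @ g
    x -= eta / np.sqrt(g2_sum) * g

print("final loss:", 0.5 * np.mean((A @ x - b) ** 2))
```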
1 code implementation • 28 Dec 2023 • Farshed Abdukhakimov, Chulu Xiang, Dmitry Kamzolov, Robert Gower, Martin Takáč
Adaptive optimization methods are widely recognized as among the most popular approaches for training Deep Neural Networks (DNNs).
1 code implementation • 3 Oct 2023 • Farshed Abdukhakimov, Chulu Xiang, Dmitry Kamzolov, Martin Takáč
Stochastic Gradient Descent (SGD) is one of the many iterative optimization methods widely used to solve machine learning problems.
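For context, a minimal sketch of SGD with a stochastic Polyak step size, the kind of adaptive rule this line of work studies (the logistic-loss problem, the per-sample lower bound f_i^* = 0, and the step cap are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

# Logistic regression on synthetic data: f_i(x) = log(1 + exp(-y_i * a_i^T x)),
# whose per-sample infimum is 0, so f_i^* = 0 is a valid lower bound.
n, d = 500, 10
A = rng.normal(size=(n, d))
y = np.sign(A @ rng.normal(size=d))

x = np.zeros(d)
for t in range(3000):
    i = rng.integers(n)
    m = np.clip(y[i] * (A[i] @ x), -60.0, 60.0)   # clip for numerical safety
    f_i = np.log1p(np.exp(-m))                    # per-sample loss
    g = -y[i] * A[i] / (1.0 + np.exp(m))          # per-sample gradient
    # Stochastic Polyak step size: gamma = (f_i(x) - f_i^*) / ||grad f_i||^2.
    gamma = f_i / (g @ g + 1e-12)
    x -= min(gamma, 10.0) * g                     # cap the step for safety

print("train accuracy:", np.mean(np.sign(A @ x) == y))
```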
1 code implementation • 15 Jul 2022 • Naif Alkhunaizi, Dmitry Kamzolov, Martin Takáč, Karthik Nandakumar
Federated Learning (FL) is a promising solution that enables collaborative training through the exchange of model parameters instead of raw data.
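A minimal FedAvg-style sketch of that exchange (illustrative only, not this paper's method): each client updates the model on its private shard, and the server averages parameters, so raw data never leaves a client.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy federated linear regression: each client holds a private data shard.
d, clients = 5, 4
x_true = rng.normal(size=d)
shards = []
for _ in range(clients):
    A = rng.normal(size=(50, d))
    shards.append((A, A @ x_true + 0.1 * rng.normal(size=50)))

def local_update(x, A, b, lr=0.01, steps=10):
    """Client-side gradient steps on the local shard; only x leaves the client."""
    x = x.copy()
    for _ in range(steps):
        x -= lr * A.T @ (A @ x - b) / len(b)
    return x

x = np.zeros(d)
for rnd in range(50):
    # Each client trains locally, then the server averages the parameters.
    local_models = [local_update(x, A, b) for A, b in shards]
    x = np.mean(local_models, axis=0)

print("distance to x_true:", np.linalg.norm(x - x_true))
```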
no code implementations • 1 Jun 2022 • Abdurakhmon Sadiev, Aleksandr Beznosikov, Abdulla Jasem Almansoori, Dmitry Kamzolov, Rachael Tappenden, Martin Takáč
There are several algorithms for such problems, but existing methods often work poorly when the problem is badly scaled and/or ill-conditioned, and a primary goal of this work is to introduce methods that alleviate this issue.
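A toy illustration of the issue and of one standard remedy, a diagonal preconditioner estimated with Hutchinson's trick (a generic sketch, not the paper's algorithm):

```python
import numpy as np

rng = np.random.default_rng(3)

# Badly scaled quadratic f(x) = 0.5 * x^T H x with diagonal, ill-conditioned H.
h = 10.0 ** rng.uniform(-3, 3, size=10)     # Hessian diagonal, condition ~1e6

def grad(x):
    return h * x                            # gradient of f

x_plain = rng.normal(size=10)
x_prec = x_plain.copy()

# Hutchinson's estimator for the Hessian diagonal: E[z * (H z)], Rademacher z.
# Here H is diagonal, so H z = h * z and the estimate recovers h.
diag_est = np.zeros(10)
for _ in range(200):
    z = rng.choice([-1.0, 1.0], size=10)
    diag_est += z * (h * z)
diag_est /= 200

for t in range(300):
    x_plain -= 1e-3 * grad(x_plain)          # plain GD: step capped by largest scale
    x_prec -= 0.5 * grad(x_prec) / diag_est  # preconditioned: rescaled per coordinate

print("plain GD loss:      ", 0.5 * np.sum(h * x_plain**2))
print("preconditioned loss:", 0.5 * np.sum(h * x_prec**2))
```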
no code implementations • 16 Feb 2021 • Pavel Dvurechensky, Dmitry Kamzolov, Aleksandr Lukashevich, Soomin Lee, Erik Ordentlich, César A. Uribe, Alexander Gasnikov
Statistical preconditioning enables fast methods for distributed large-scale empirical risk minimization problems.
Distributed Optimization
Optimization and Control
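Schematically, the statistical preconditioning above replaces the Euclidean gradient step with a Bregman step built from a local surrogate held by one machine. The display below is a generic template from this literature; the surrogate φ, step size η, and relative-conditioning constants are illustrative notation, not taken from the abstract.

```latex
% One outer iteration: a gradient step measured in the Bregman geometry of a
% local surrogate phi (e.g., the loss on the server's own sample plus a
% regularizer), instead of the Euclidean geometry.
\[
x_{k+1} = \operatorname*{arg\,min}_{x}
  \Big\{ \langle \nabla f(x_k),\, x \rangle + \tfrac{1}{\eta}\, D_{\phi}(x, x_k) \Big\},
\qquad
D_{\phi}(x, y) = \phi(x) - \phi(y) - \langle \nabla \phi(y),\, x - y \rangle .
\]
% If f is L-smooth and mu-strongly convex *relative* to phi, i.e.
%   mu * D_phi(x, y) <= D_f(x, y) <= L * D_phi(x, y),
% the iteration complexity is governed by the relative condition number L/mu,
% which statistical similarity between local and global data keeps small.
```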
no code implementations • 31 Dec 2020 • Artem Agafonov, Dmitry Kamzolov, Pavel Dvurechensky, Alexander Gasnikov
We propose general non-accelerated and accelerated tensor methods under inexact information on the derivatives of the objective, and we analyze their convergence rates.
Optimization and Control
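Schematically, a p-th order tensor step minimizes a regularized Taylor model of the objective; under inexactness, the exact derivatives are replaced by approximations. The display below is generic notation for such a step; the approximations G_i, tolerances δ_i, and regularization constant M are illustrative, not copied from the paper.

```latex
% Inexact tensor step: minimize a regularized Taylor-like model built from
% approximate derivatives G_i(x_k) of orders i = 1, ..., p.
\[
x_{k+1} = \operatorname*{arg\,min}_{x}
  \Big\{ \sum_{i=1}^{p} \tfrac{1}{i!}\, G_i(x_k)[x - x_k]^{i}
       + \tfrac{M}{(p+1)!}\, \|x - x_k\|^{p+1} \Big\},
\qquad
\|G_i(x_k) - \nabla^i f(x_k)\| \le \delta_i .
\]
% The convergence analysis then tracks how the tolerances delta_i enter the
% rate of the corresponding exact tensor method.
```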
no code implementations • 11 Dec 2020 • Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev
For this setting, we first present known results on the convergence rates of deterministic first-order methods, followed by a general theoretical analysis of optimal stochastic and randomized gradient schemes and an overview of stochastic first-order methods.
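For orientation, the classical deterministic benchmark such overviews typically start from is the following textbook bound (a standard result, not a claim from this paper): for an L-smooth objective f, gradient descent with step size 1/L satisfies

```latex
\[
\min_{0 \le k < N} \|\nabla f(x_k)\|^2
  \;\le\; \frac{2L\,\big(f(x_0) - f^{*}\big)}{N},
\]
% so reaching an eps-stationary point (||grad f|| <= eps) takes
% N = O(eps^{-2}) gradient evaluations, the baseline against which the
% stochastic and randomized schemes are compared.
```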
2 code implementations • 18 Apr 2020 • Darina Dvinskikh, Dmitry Kamzolov, Alexander Gasnikov, Pavel Dvurechensky, Dmitry Pasechnyk, Vladislav Matykhin, Alexei Chernov
We propose an accelerated meta-algorithm that yields accelerated methods for convex unconstrained minimization in a variety of settings.
Optimization and Control
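Schematically, such meta-algorithms wrap an accelerated outer loop around inexactly solved proximal subproblems; which accelerated method one obtains depends on the inner solver (gradient, Newton, or tensor steps). The template below is a generic Monteiro-Svaiter-style display, not the paper's exact scheme; the constant H and the extrapolation rule are illustrative.

```latex
% Outer iteration: extrapolate, then approximately solve a regularized
% proximal subproblem around the extrapolated point.
\[
\tilde{x}_k = \alpha_k\, y_k + (1 - \alpha_k)\, x_k,
\qquad
x_{k+1} \approx \operatorname*{arg\,min}_{x}
  \Big\{ f(x) + \tfrac{H}{2}\, \|x - \tilde{x}_k\|^{2} \Big\},
\]
% with y_k the momentum sequence and alpha_k chosen as in Nesterov-type
% acceleration; swapping the inner solver changes the per-iteration cost
% while the outer loop supplies the acceleration.
```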