Search Results for author: Dmitry Kamzolov

Found 9 papers, 4 papers with code

AdaBatchGrad: Combining Adaptive Batch Size and Adaptive Step Size

no code implementations • 7 Feb 2024 • Petr Ostroukhov, Aigerim Zhumabayeva, Chulu Xiang, Alexander Gasnikov, Martin Takáč, Dmitry Kamzolov

To substantiate the efficacy of our method, we show experimentally how introducing an adaptive step size and an adaptive batch size gradually improves the performance of regular SGD.
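A minimal sketch of the general idea behind combining the two adaptivity mechanisms in an SGD loop. This is illustrative only, not the paper's AdaBatchGrad algorithm: the AdaGrad-norm step-size rule, the variance-based batch-growth test, and all function names (`grad_fn`, `adaptive_sgd`) are assumptions made for the example.

```python
import numpy as np

def adaptive_sgd(grad_fn, x0, n_samples, steps=100,
                 batch0=8, eps0=1.0, growth=1.1):
    """Illustrative SGD loop adapting both step size and batch size.

    grad_fn(x, idx) is assumed to return per-sample gradients for indices idx,
    with shape (len(idx), dim).
    """
    x = x0.copy()
    batch = batch0
    accum = 0.0  # running sum of squared gradient norms (AdaGrad-norm style)
    for _ in range(steps):
        idx = np.random.choice(n_samples, size=min(int(batch), n_samples),
                               replace=False)
        g_samples = grad_fn(x, idx)
        g = g_samples.mean(axis=0)
        # Adaptive step size: damp the learning rate by accumulated gradient norms.
        accum += np.dot(g, g)
        step = eps0 / np.sqrt(accum + 1e-12)
        # Adaptive batch size: grow the batch when the sample variance of the
        # gradient dominates the squared mean gradient (illustrative norm test).
        var = g_samples.var(axis=0).sum()
        if var > len(idx) * np.dot(g, g):
            batch *= growth
        x -= step * g
    return x
```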

SANIA: Polyak-type Optimization Framework Leads to Scale Invariant Stochastic Algorithms

1 code implementation • 28 Dec 2023 • Farshed Abdukhakimov, Chulu Xiang, Dmitry Kamzolov, Robert Gower, Martin Takáč

Adaptive optimization methods are among the most popular approaches for training Deep Neural Networks (DNNs).

Stochastic Gradient Descent with Preconditioned Polyak Step-size

1 code implementation • 3 Oct 2023 • Farshed Abdukhakimov, Chulu Xiang, Dmitry Kamzolov, Martin Takáč

Stochastic Gradient Descent (SGD) is one of many iterative optimization methods widely used for solving machine learning problems.
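A brief sketch of the stochastic Polyak step size that the title refers to: the step is scaled by how far the current mini-batch loss sits above its optimal value. This is the plain (unpreconditioned) variant, not the paper's method; the interface (`loss_fn`, `grad_fn`, `sampler`) and the assumption f* ≈ 0 for interpolating models are illustrative.

```python
import numpy as np

def sgd_polyak(loss_fn, grad_fn, x0, sampler, steps=100,
               f_star=0.0, gamma_max=1.0):
    """SGD with a stochastic Polyak step size (illustrative sketch).

    loss_fn(x, batch) -> scalar mini-batch loss
    grad_fn(x, batch) -> mini-batch gradient (1-D array)
    sampler()         -> a mini-batch of data
    """
    x = x0.copy()
    for _ in range(steps):
        batch = sampler()
        f = loss_fn(x, batch)
        g = grad_fn(x, batch)
        # Polyak step: (f(x) - f*) / ||grad||^2, capped for stability.
        step = (f - f_star) / (np.dot(g, g) + 1e-12)
        x -= min(step, gamma_max) * g
    return x
```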

Suppressing Poisoning Attacks on Federated Learning for Medical Imaging

1 code implementation • 15 Jul 2022 • Naif Alkhunaizi, Dmitry Kamzolov, Martin Takáč, Karthik Nandakumar

Federated Learning (FL) is a promising solution that enables collaborative training through exchange of model parameters instead of raw data.

Federated Learning • Outlier Detection
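To make the "exchange of model parameters instead of raw data" concrete, here is a generic server-side aggregation sketch, together with a simple coordinate-wise median as one illustrative way to blunt a few poisoned updates. Neither function is the paper's defense; the names and the weighting scheme are assumptions for the example.

```python
import numpy as np

def fedavg(client_updates, weights=None):
    """FedAvg-style aggregation: average parameter vectors sent by clients,
    so raw training data never leaves the client (generic sketch)."""
    updates = np.stack(client_updates)            # shape: (n_clients, dim)
    if weights is None:
        weights = np.full(len(client_updates), 1.0 / len(client_updates))
    return weights @ updates

def coordinate_median(client_updates):
    """Illustrative robust alternative: the coordinate-wise median is less
    sensitive to a handful of poisoned updates (not the paper's method)."""
    return np.median(np.stack(client_updates), axis=0)
```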

Stochastic Gradient Methods with Preconditioned Updates

no code implementations • 1 Jun 2022 • Abdurakhmon Sadiev, Aleksandr Beznosikov, Abdulla Jasem Almansoori, Dmitry Kamzolov, Rachael Tappenden, Martin Takáč

Several algorithms exist for such problems, but they often work poorly when the problem is badly scaled and/or ill-conditioned; a primary goal of this work is to introduce methods that alleviate this issue.
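A minimal sketch of what a preconditioned stochastic update looks like, using a running squared-gradient diagonal preconditioner (RMSProp-style) to rescale badly scaled coordinates. This is illustrative of the general mechanism only; the paper builds its preconditioner from Hessian information rather than from squared gradients, and the function names here are assumptions.

```python
import numpy as np

def preconditioned_sgd(grad_fn, x0, steps=100, lr=0.1, beta=0.99, eps=1e-8):
    """SGD where each coordinate is rescaled by an estimate of its scale,
    so poorly scaled directions take comparable effective steps."""
    x = x0.copy()
    d = np.zeros_like(x)                  # running per-coordinate scale estimate
    for _ in range(steps):
        g = grad_fn(x)                    # stochastic gradient at x
        d = beta * d + (1 - beta) * g * g
        x -= lr * g / (np.sqrt(d) + eps)  # preconditioned (rescaled) update
    return x
```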

Inexact Tensor Methods and Their Application to Stochastic Convex Optimization

no code implementations • 31 Dec 2020 • Artem Agafonov, Dmitry Kamzolov, Pavel Dvurechensky, Alexander Gasnikov

We propose general non-accelerated and accelerated tensor methods under inexact information on the derivatives of the objective, and analyze their convergence rates.

Optimization and Control
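For context, a standard p-th order (tensor) step of the kind such methods build on minimizes the Taylor model of the objective plus a high-order regularizer; the notation below is generic and not taken from the paper. "Inexact information on the derivatives" means the tensors D^i f(x_k) are only available up to a controlled error.

```latex
\[
x_{k+1} \in \operatorname*{arg\,min}_{y}\;
\Bigl\{ \sum_{i=0}^{p} \tfrac{1}{i!}\, D^{i} f(x_k)[y - x_k]^{i}
      \;+\; \tfrac{M}{(p+1)!}\,\|y - x_k\|^{p+1} \Bigr\}.
\]
```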

Recent Theoretical Advances in Non-Convex Optimization

no code implementations • 11 Dec 2020 • Marina Danilova, Pavel Dvurechensky, Alexander Gasnikov, Eduard Gorbunov, Sergey Guminov, Dmitry Kamzolov, Innokentiy Shibaev

For this setting, we first present known results on the convergence rates of deterministic first-order methods, followed by a general theoretical analysis of optimal stochastic and randomized gradient schemes and an overview of stochastic first-order methods.
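A representative deterministic first-order rate of the kind surveyed (a textbook result, not a claim about the paper's new contributions): for an L-smooth, possibly non-convex f, gradient descent with step size 1/L satisfies

```latex
\[
\min_{0 \le k < K} \|\nabla f(x_k)\|^{2}
  \;\le\; \frac{2L\,\bigl(f(x_0) - f^{*}\bigr)}{K},
\]
```

so an ε-stationary point is reached after O(1/ε²) iterations.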

Accelerated meta-algorithm for convex optimization

2 code implementations • 18 Apr 2020 • Darina Dvinskikh, Dmitry Kamzolov, Alexander Gasnikov, Pavel Dvurechensky, Dmitry Pasechnyk, Vladislav Matykhin, Alexei Chernov

We propose an accelerated meta-algorithm that makes it possible to obtain accelerated methods for convex unconstrained minimization in different settings.

Optimization and Control
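As a point of reference for what "accelerated" means here, in the first-order smooth convex case such frameworks recover the classical accelerated bound (a standard result, shown only for orientation and not specific to this paper's meta-algorithm):

```latex
\[
f(x_k) - f^{*} \;=\; O\!\left(\frac{L\,\|x_0 - x^{*}\|^{2}}{k^{2}}\right),
\]
```

compared with the O(1/k) rate of plain gradient descent.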
