Search Results for author: Egor Shulgin

Found 9 papers, 3 papers with code

MAST: Model-Agnostic Sparsified Training

1 code implementation • 27 Nov 2023 • Yury Demidovich, Grigory Malinovsky, Egor Shulgin, Peter Richtárik

We introduce a novel optimization problem formulation that departs from the conventional way of minimizing machine learning model loss as a black-box function.
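
Below is a minimal sketch of the kind of sparsified-training objective such a formulation points at: rather than treating the loss f as a black box of the dense model x, one minimizes the expectation of f over random sparsification operators applied to x. The quadratic toy loss and the Bernoulli mask distribution are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's exact formulation):
# minimize E_S[f(S * x)] over x, where S is a random sparsification mask.
import numpy as np

rng = np.random.default_rng(0)
d = 10
M = rng.standard_normal((d, d))
A = M.T @ M / d + np.eye(d)          # a positive-definite quadratic as a toy loss
b = rng.standard_normal(d)

def f(x):
    return 0.5 * x @ A @ x - b @ x

def sparsified_objective(x, p=0.5, num_samples=2000):
    """Monte-Carlo estimate of E_S[f(S * x)] with i.i.d. Bernoulli(p) masks,
    rescaled by 1/p so that the sparsified model is unbiased: E[S * x] = x."""
    total = 0.0
    for _ in range(num_samples):
        mask = (rng.random(d) < p) / p
        total += f(mask * x)
    return total / num_samples

x = rng.standard_normal(d)
print("dense loss      f(x)        =", f(x))
print("sparsified loss E_S[f(S*x)] ≈", sparsified_objective(x))
```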

Towards a Better Theoretical Understanding of Independent Subnetwork Training

no code implementations • 28 Jun 2023 • Egor Shulgin, Peter Richtárik

We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication, and provide a precise analysis of its optimization performance on a quadratic model.

Distributed Computing
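
As a rough illustration of the independent subnetwork training (IST) setting analysed here, the toy sketch below runs IST-style updates on a quadratic: coordinates are partitioned across workers, each worker updates only its own block with the rest of its local copy masked out, and the blocks are reassembled. The quadratic, the partition, and the stepsize are illustrative assumptions, not the paper's exact algorithm.

```python
# Toy IST-style loop on f(x) = 0.5 x^T A x - b^T x (illustrative only): each
# "worker" owns a block of coordinates, sees only its own subnetwork (other
# coordinates zeroed out), and updates just its block.
import numpy as np

rng = np.random.default_rng(1)
d, n_workers, steps, lr = 8, 4, 300, 0.05
M = rng.standard_normal((d, d))
A = M.T @ M / d + np.eye(d)
b = rng.standard_normal(d)

blocks = np.array_split(np.arange(d), n_workers)   # coordinate partition
x = np.zeros(d)

for _ in range(steps):
    new_x = np.zeros(d)
    for idx in blocks:
        local = np.zeros(d)
        local[idx] = x[idx]              # worker sees only its subnetwork
        grad = A @ local - b             # gradient of the masked model
        new_x[idx] = local[idx] - lr * grad[idx]
    x = new_x                            # reassemble the blocks

# The residual typically does NOT vanish: masking the other blocks biases the
# updates, which is the kind of effect a precise analysis needs to capture.
print("residual ||Ax - b|| after IST-style training:", np.linalg.norm(A @ x - b))
```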

Shifted Compression Framework: Generalizations and Improvements

no code implementations • 21 Jun 2022 • Egor Shulgin, Peter Richtárik

Communication is one of the key bottlenecks in the distributed training of large-scale machine learning models, and lossy compression of exchanged information, such as stochastic gradients or models, is one of the most effective instruments to alleviate this issue.
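
A minimal sketch of the shifted-compression idea, under illustrative assumptions (rand-k sparsification as the compressor, a single fixed message, a simple shift-update rule): the sender compresses the difference between the message and a shift known to both sides, the receiver adds the shift back, and the shift is gradually moved towards the message, so the compression error shrinks over rounds.

```python
# Minimal sketch of shifted compression (illustrative choices throughout):
# transmit C(g - h) instead of C(g), reconstruct as h + C(g - h), then move h
# towards g. Rand-k sparsification stands in for a generic unbiased compressor.
import numpy as np

rng = np.random.default_rng(2)
d, k = 100, 10

def rand_k(v):
    """Unbiased rand-k sparsification: keep k random coordinates, rescale by d/k."""
    idx = rng.choice(d, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * d / k
    return out

g = rng.standard_normal(d)      # message to transmit (e.g., a stochastic gradient)
h = np.zeros(d)                 # shift maintained by both sender and receiver
eta = k / d                     # a safe shift stepsize for rand-k, 1 / (1 + omega)

for _ in range(60):
    direct_err  = np.linalg.norm(rand_k(g) - g)        # compress g directly
    delta       = rand_k(g - h)                         # compress the shifted message
    shifted_err = np.linalg.norm(h + delta - g)         # reconstruction error
    h = h + eta * delta                                 # move the shift towards g

print("final error, direct compression :", direct_err)
print("final error, shifted compression:", shifted_err)
```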

Certified Robustness in Federated Learning

1 code implementation • 6 Jun 2022 • Motasem Alfarra, Juan C. Pérez, Egor Shulgin, Peter Richtárik, Bernard Ghanem

However, as in the single-node supervised learning setup, models trained in federated learning suffer from vulnerability to imperceptible input transformations known as adversarial attacks, questioning their deployment in security-related applications.

Federated Learning

ADOM: Accelerated Decentralized Optimization Method for Time-Varying Networks

no code implementations • 18 Feb 2021 • Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Alexander Rogozin, Alexander Gasnikov

We propose ADOM - an accelerated method for smooth and strongly convex decentralized optimization over time-varying networks.
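
For context on the setting (decentralized optimization over a time-varying network), here is a plain decentralized gradient-descent sketch, not ADOM itself: nodes alternately gossip with their current neighbours and take local gradient steps, with the neighbour pattern changing over time. The local objectives, mixing matrices, and stepsize are illustrative assumptions.

```python
# Plain decentralized gradient descent over a time-varying network, included
# only to illustrate the setting (this is NOT the accelerated ADOM method).
import numpy as np

rng = np.random.default_rng(7)
n_nodes, d, steps, lr = 6, 4, 300, 0.05

C = rng.standard_normal((n_nodes, d))   # node i's local objective: 0.5 ||x - C[i]||^2
X = np.zeros((n_nodes, d))              # one local iterate per node (row)

def mixing_matrix(t):
    """Doubly stochastic mixing matrix of a small graph whose links rotate over time."""
    W = 0.5 * np.eye(n_nodes)
    shift = 1 + (t % (n_nodes - 1))     # time-varying neighbour pattern
    for i in range(n_nodes):
        W[i, (i + shift) % n_nodes] += 0.25
        W[i, (i - shift) % n_nodes] += 0.25
    return W

for t in range(steps):
    X = mixing_matrix(t) @ X            # gossip with the current neighbours
    X = X - lr * (X - C)                # local gradient step on each f_i

print("node disagreement          :", np.linalg.norm(X - X.mean(axis=0)))
print("average distance to optimum:", np.linalg.norm(X.mean(axis=0) - C.mean(axis=0)))
```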

Uncertainty Principle for Communication Compression in Distributed and Federated Learning and the Search for an Optimal Compressor

no code implementations • 20 Feb 2020 • Mher Safaryan, Egor Shulgin, Peter Richtárik

In designing a compression method, one aims to communicate as few bits as possible, which minimizes the cost per communication round, while at the same time attempting to impart as little distortion (variance) to the communicated messages as possible, which minimizes the adverse effect of the compression on the overall number of communication rounds.

Federated Learning • Quantization
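
As a concrete illustration of the trade-off described in the abstract, the sketch below measures, for rand-k sparsification (one standard unbiased compressor, used here purely as an example), how sending fewer coordinates per round lowers the communication cost but increases the variance injected into the message.

```python
# Bits-vs-variance trade-off, illustrated with rand-k sparsification: smaller k
# means fewer coordinates communicated per round but larger distortion, since
# E||C(x) - x||^2 = (d/k - 1) ||x||^2 for the unbiased (rescaled) rand-k compressor.
import numpy as np

rng = np.random.default_rng(3)
d = 1000
g = rng.standard_normal(d)

def rand_k(v, k):
    idx = rng.choice(v.size, size=k, replace=False)
    out = np.zeros_like(v)
    out[idx] = v[idx] * v.size / k      # rescaling makes the compressor unbiased
    return out

for k in (10, 100, 1000):
    errs = [np.linalg.norm(rand_k(g, k) - g) ** 2 for _ in range(200)]
    print(f"k={k:4d}  coordinates sent per round={k:4d}  "
          f"empirical variance={np.mean(errs):10.1f}  "
          f"theory (d/k-1)*||g||^2={(d / k - 1) * np.linalg.norm(g) ** 2:10.1f}")
```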

Adaptive Catalyst for Smooth Convex Optimization

1 code implementation • 25 Nov 2019 • Anastasiya Ivanova, Dmitry Pasechnyuk, Dmitry Grishchenko, Egor Shulgin, Alexander Gasnikov, Vladislav Matyukhin

In this paper, we present a generic framework that allows accelerating almost arbitrary non-accelerated deterministic and randomized algorithms for smooth convex optimization problems.

Optimization and Control
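
The sketch below shows a simplified Catalyst-style outer loop on a least-squares problem, as a hedged illustration of the general acceleration-by-envelope idea (it is not the adaptive procedure of the paper): each outer iteration hands the inner, non-accelerated solver the regularized subproblem min_x f(x) + (kappa/2)||x - y||^2 around an extrapolated point y. The choice of kappa, the fixed momentum rule, and the inner iteration budget are illustrative assumptions.

```python
# Simplified Catalyst-style acceleration of a non-accelerated inner solver
# (illustrative constants; not the adaptive method of the paper).
import numpy as np

rng = np.random.default_rng(4)
m, d = 50, 20
A = rng.standard_normal((m, d))
b = rng.standard_normal(m)

f      = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
grad_f = lambda x: A.T @ (A @ x - b)
L = np.linalg.norm(A, 2) ** 2                  # smoothness constant of f

def inner_gd(y, kappa, iters=50):
    """Non-accelerated inner solver for min_x f(x) + (kappa/2)||x - y||^2."""
    x = y.copy()
    step = 1.0 / (L + kappa)
    for _ in range(iters):
        x = x - step * (grad_f(x) + kappa * (x - y))
    return x

kappa, beta = 1.0, 0.5                         # illustrative regularization / momentum
x = np.zeros(d)
x_prev = x.copy()
for _ in range(30):
    y = x + beta * (x - x_prev)                # extrapolated (momentum) point
    x_prev, x = x, inner_gd(y, kappa)

x_star = np.linalg.lstsq(A, b, rcond=None)[0]
print("f(x) after the Catalyst-style loop:", f(x))
print("optimal value f(x*)               :", f(x_star))
```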

Revisiting Stochastic Extragradient

no code implementations • 27 May 2019 • Konstantin Mishchenko, Dmitry Kovalev, Egor Shulgin, Peter Richtárik, Yura Malitsky

We fix a fundamental issue in the stochastic extragradient method by providing a new sampling strategy that is motivated by approximating implicit updates.
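
To make the sampling idea concrete, here is a toy sketch of stochastic extragradient in which the same stochastic sample is reused for both the extrapolation step and the update step (the flavour of sampling strategy the abstract refers to; the exact method and its analysis are in the paper). The strongly monotone toy game and the stepsize are illustrative assumptions.

```python
# Toy stochastic extragradient (SEG) with the SAME sample reused in both steps
# of each iteration. The game min_x max_y 0.5||x||^2 + x^T B y - 0.5||y||^2
# with a noisy B, and the stepsize, are illustrative choices only.
import numpy as np

rng = np.random.default_rng(5)
d, steps, eta = 5, 500, 0.05
B_mean = rng.standard_normal((d, d))

def sample_operator():
    B = B_mean + 0.1 * rng.standard_normal((d, d))   # one stochastic realization
    def F(z):
        x, y = z[:d], z[d:]
        return np.concatenate([x + B @ y, y - B.T @ x])   # game operator (grad_x, -grad_y)
    return F

z = np.ones(2 * d)
for _ in range(steps):
    F = sample_operator()        # draw ONE sample ...
    z_half = z - eta * F(z)      # ... use it for the extrapolation step
    z = z - eta * F(z_half)      # ... and reuse it for the update step

print("distance to the saddle point (x, y) = (0, 0):", np.linalg.norm(z))
```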

SGD: General Analysis and Improved Rates

no code implementations • 27 Jan 2019 • Robert Mansel Gower, Nicolas Loizou, Xun Qian, Alibek Sailanbayev, Egor Shulgin, Peter Richtárik

By specializing our theorem to different mini-batching strategies, such as sampling with replacement and independent sampling, we derive exact expressions for the stepsize as a function of the mini-batch size.
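
The sketch below illustrates the kind of rule this enables, for mini-batch SGD on a finite-sum least-squares problem: the stepsize is set from a batch-size-dependent smoothness quantity. The specific interpolation between the per-sample smoothness L_max and the full-batch smoothness L used here is a stand-in assumption, not the exact expression derived in the paper.

```python
# Mini-batch SGD with a batch-size-dependent stepsize (the interpolation used
# for L(tau) below is an illustrative stand-in, not the paper's exact formula).
import numpy as np

rng = np.random.default_rng(6)
n, d = 200, 10
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

L_full = np.linalg.norm(A, 2) ** 2 / n                      # smoothness of the average loss
L_max  = max(np.linalg.norm(A[i]) ** 2 for i in range(n))   # max per-sample smoothness

def stepsize(tau):
    # Illustrative interpolation: batch size 1 -> 1/(2 L_max), batch size n -> 1/(2 L_full).
    L_tau = L_max / tau + (1 - 1 / tau) * L_full
    return 1.0 / (2 * L_tau)

def minibatch_sgd(tau, iters=2000):
    x = np.zeros(d)
    gamma = stepsize(tau)
    for _ in range(iters):
        idx = rng.choice(n, size=tau, replace=False)        # sampling without replacement
        grad = A[idx].T @ (A[idx] @ x - b[idx]) / tau
        x = x - gamma * grad
    return 0.5 * np.mean((A @ x - b) ** 2)

for tau in (1, 10, 100):
    print(f"batch size {tau:3d}: stepsize {stepsize(tau):.4f}, final loss {minibatch_sgd(tau):.4f}")
```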
