Distributed and Stochastic Optimization Methods with Gradient Compression and Local Steps

20 Dec 2021 · Eduard Gorbunov

In this thesis, we propose new theoretical frameworks for the analysis of stochastic and distributed methods with error compensation and local updates. Using these frameworks, we develop more than 20 new optimization methods, including the first linearly converging Error-Compensated SGD and the first linearly converging Local-SGD for arbitrarily heterogeneous local functions. Moreover, the thesis contains several new distributed methods with unbiased compression for non-convex distributed optimization problems. The derived complexity results for these methods outperform the previous best-known results for the considered problems. Finally, we propose a new scalable decentralized fault-tolerant distributed method and, under reasonable assumptions, derive iteration complexity bounds for this method that match those of centralized Local-SGD.
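To illustrate the error-compensation idea mentioned in the abstract, the sketch below shows a generic single-worker SGD loop with Top-k gradient compression and error feedback: the part of the update dropped by the compressor is stored and added back on the next step. This is a minimal, assumed illustration of the general technique (function names, step size, and compression level are chosen for the toy example), not the specific methods developed in the thesis.

```python
import numpy as np

def topk_compress(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest (a biased compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def error_compensated_sgd(grad_fn, x0, lr=0.1, k=2, steps=100):
    """Single-worker sketch of SGD with error feedback:
    the compression error is accumulated and re-injected into the next step."""
    x = x0.copy()
    e = np.zeros_like(x0)                   # accumulated compression error
    for _ in range(steps):
        g = grad_fn(x)                      # (stochastic) gradient at the current point
        c = topk_compress(lr * g + e, k)    # compress the error-corrected step
        e = lr * g + e - c                  # remember what the compressor dropped
        x = x - c                           # apply only the transmitted part
    return x

# Toy usage: minimize the quadratic f(x) = 0.5 * ||x||^2
if __name__ == "__main__":
    grad = lambda x: x
    x_final = error_compensated_sgd(grad, np.ones(5), lr=0.5, k=2, steps=200)
    print(x_final)  # entries should be close to zero
```

In a distributed setting, each worker would maintain its own error vector and transmit only the compressed update to the server; the sketch keeps a single worker to highlight the feedback mechanism itself.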
