Convergence Theory of Generalized Distributed Subgradient Method with Random Quantization

22 Jul 2022 · Zhaoyue Xia, Jun Du, Yong Ren

The distributed subgradient method (DSG) is a widely studied algorithm for large-scale distributed optimization problems arising in machine learning applications. Most existing works on DSG assume ideal communication between the cooperative agents, so that the information shared between agents is exact. This assumption, however, raises privacy concerns and is not feasible when the wireless transmission links are of poor quality. To overcome this challenge, a common approach is to quantize the data locally before transmission, which avoids exposing raw data and significantly reduces the data size. Compared with exchanging exact data, quantization introduces a loss of accuracy that in turn affects the convergence of the algorithm. To address this problem, we propose a generalized distributed subgradient method with random quantization, which can be interpreted as a two time-scale stochastic approximation method. We provide comprehensive results on the convergence of the algorithm and derive upper bounds on the convergence rates in terms of the number of quantization bits, the stepsizes, and the number of network agents. Our results extend existing work, which considers only special cases and lacks general conclusions on the convergence rates. Finally, numerical simulations are conducted on linear regression problems to support our theoretical results.
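To make the general scheme concrete, the following is a minimal sketch of a distributed subgradient iteration with an unbiased random (dithered) quantizer on a toy linear regression problem. It is not the paper's exact two time-scale algorithm: the network size, mixing matrix, quantizer range, number of bits, and stepsize schedule below are illustrative assumptions, and a single diminishing stepsize is used for simplicity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative problem sizes (not from the paper)
n_agents, dim, n_samples = 5, 3, 20

# Each agent i holds a local least-squares objective f_i(x) = ||A_i x - b_i||^2 / 2
A = [rng.standard_normal((n_samples, dim)) for _ in range(n_agents)]
x_true = rng.standard_normal(dim)
b = [Ai @ x_true + 0.1 * rng.standard_normal(n_samples) for Ai in A]

# Doubly stochastic mixing matrix for a ring network (illustrative choice)
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

def random_quantize(v, num_bits=4, v_max=10.0):
    """Unbiased stochastic quantizer: round each entry up or down at random
    so that the quantized value equals v in expectation."""
    levels = 2 ** num_bits - 1
    step = 2 * v_max / levels
    v_clip = np.clip(v, -v_max, v_max)
    low = np.floor((v_clip + v_max) / step)
    prob_up = (v_clip + v_max) / step - low
    q_idx = low + (rng.random(v.shape) < prob_up)
    return q_idx * step - v_max

x = np.zeros((n_agents, dim))
for k in range(500):
    alpha = 1.0 / (k + 10)                               # diminishing stepsize
    q = np.array([random_quantize(xi) for xi in x])      # agents exchange quantized states
    mixed = W @ q                                        # consensus step on quantized values
    grads = np.array([A[i].T @ (A[i] @ x[i] - b[i]) / n_samples
                      for i in range(n_agents)])
    x = mixed - alpha * grads                            # local (sub)gradient step

print("average distance to x_true:", np.linalg.norm(x - x_true, axis=1).mean())
```

Because the quantizer is unbiased, the quantization error acts as zero-mean noise in the consensus step, which is the source of the accuracy loss whose effect on convergence the paper quantifies.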
