How to Attain Communication-Efficient DNN Training? Convert, Compress, Correct

18 Apr 2022  ·  Zhong-Jing Chen, Eduin E. Hernandez, Yu-Chih Huang, Stefano Rini

This paper introduces CO3, an algorithm for communication-efficient federated Deep Neural Network (DNN) training. CO3 takes its name from the three processing steps applied to reduce the communication load when transmitting the local DNN gradients from the remote users to the Parameter Server, namely: (i) gradient quantization through floating-point conversion, (ii) lossless compression of the quantized gradient, and (iii) quantization error correction. We carefully design each of these steps to ensure good training performance under a constraint on the communication rate. In particular, in steps (i) and (ii), we adopt the assumption that DNN gradients are distributed according to a generalized normal distribution, an assumption which is validated numerically in the paper. For step (iii), we utilize an error feedback mechanism with memory decay to correct the quantization error introduced in step (i). We argue that the memory decay coefficient, much like the learning rate, can be optimally tuned to improve convergence. A rigorous convergence analysis of the proposed CO3 with SGD is provided. Moreover, through extensive simulations, we show that CO3 offers improved performance over existing gradient compression schemes in the literature that employ sketching and non-uniform quantization of the local gradients.
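The convert-compress-correct pipeline can be illustrated with a minimal, self-contained sketch. This is not the paper's implementation: the float16 cast stands in for the paper's floating-point conversion, zlib stands in for the entropy coder designed around the generalized-normal gradient model, and the exact form of the memory-decay update is assumed here purely for illustration.

```python
import zlib
import numpy as np

def co3_round(grad, memory, decay=0.9):
    """One illustrative CO3-style round: convert, compress, correct (sketch, not the paper's code)."""
    corrected = grad + memory                           # add back the decayed quantization error
    quantized = corrected.astype(np.float16)            # (i) convert: low-precision floating-point cast (assumed format)
    payload = zlib.compress(quantized.tobytes())        # (ii) compress: generic lossless coder as a placeholder
    memory = decay * (corrected - quantized.astype(np.float32))  # (iii) correct: decaying error-feedback memory
    return payload, memory

# Toy usage on a synthetic gradient stream.
rng = np.random.default_rng(0)
memory = np.zeros(10_000, dtype=np.float32)
for t in range(3):
    grad = rng.standard_normal(10_000).astype(np.float32)
    payload, memory = co3_round(grad, memory)
    print(f"round {t}: {len(payload)} bytes on the wire, "
          f"residual error norm {np.linalg.norm(memory):.3f}")
```

On the receiving side, the Parameter Server would decompress the payload and cast the float16 values back to full precision before aggregation; the decaying memory carries the resulting quantization error into the next round instead of discarding it.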
