Hierarchical Federated Learning with Quantization: Convergence Analysis and System Design

26 Mar 2021  ·  Lumin Liu, Jun Zhang, Shenghui Song, Khaled B. Letaief

Federated learning (FL) is a powerful distributed machine learning framework in which a server aggregates models trained by different clients without accessing their private data. Hierarchical FL, with a client-edge-cloud aggregation hierarchy, can effectively exploit both the cloud server's access to many clients' data and the edge servers' proximity to the clients to achieve high communication efficiency. Neural network quantization can further reduce the communication overhead of model uploading. To fully exploit the advantages of hierarchical FL, an accurate convergence analysis with respect to the key system parameters is needed; unfortunately, existing analyses are loose and do not account for model quantization. In this paper, we derive a tighter convergence bound for hierarchical FL with quantization. The convergence result yields practical guidelines for important design problems such as the client-edge aggregation and edge-client association strategies. Based on the analytical results, we optimize the two aggregation intervals and show that the client-edge aggregation interval should decay slowly, while the edge-cloud aggregation interval needs to adapt to the ratio of the client-edge and edge-cloud propagation delays. Simulation results verify the design guidelines and demonstrate the effectiveness of the proposed aggregation strategy.
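To make the client-edge-cloud hierarchy and the two aggregation intervals concrete, the following is a minimal sketch of one cloud round in Python. All names here (tau1, tau2, stochastic_quantize, local_sgd_step, cloud_round) are illustrative assumptions rather than the paper's notation, the local update is a toy quadratic-loss step in place of real SGD, and the quantizer is a QSGD-style unbiased stochastic quantizer, one common choice that is not necessarily the exact scheme analyzed in the paper.

```python
import numpy as np

def stochastic_quantize(v, levels=16):
    """QSGD-style unbiased stochastic quantizer (an assumed, common choice)."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return v
    scaled = np.abs(v) / norm * levels           # map magnitudes to [0, levels]
    lower = np.floor(scaled)
    prob = scaled - lower                        # round up with this probability
    q = lower + (np.random.rand(v.size) < prob)
    return np.sign(v) * q * norm / levels        # unbiased: E[output] == v

def local_sgd_step(w, data, lr=0.1):
    """Placeholder client update; a real system would run SGD on local data."""
    grad = w - data                              # toy quadratic-loss gradient
    return w - lr * grad

def cloud_round(w_cloud, client_data_per_edge, tau1=5, tau2=3):
    """One cloud round: tau1 local steps per edge aggregation (client-edge
    interval), tau2 edge aggregations per cloud aggregation (edge-cloud
    interval)."""
    edge_models = []
    for client_data in client_data_per_edge:     # one edge server's clients
        w_edge = w_cloud.copy()
        for _ in range(tau2):                    # edge-cloud interval
            updates = []
            for data in client_data:
                w = w_edge.copy()
                for _ in range(tau1):            # client-edge interval
                    w = local_sgd_step(w, data)
                # clients upload quantized model deltas to their edge server
                updates.append(stochastic_quantize(w - w_edge))
            w_edge = w_edge + np.mean(updates, axis=0)   # edge aggregation
        edge_models.append(w_edge)
    return np.mean(edge_models, axis=0)          # cloud aggregation

# Toy usage: 2 edge servers, 3 clients each, with different local data means.
rng = np.random.default_rng(0)
data = [[rng.normal(loc=m, size=4) for _ in range(3)] for m in (1.0, -1.0)]
w = np.zeros(4)
for _ in range(10):
    w = cloud_round(w, data)
print(w)  # drifts toward the average of all clients' targets (near 0)
```

Under the paper's guidelines, tau1 would slowly decay over training rather than stay fixed as in this sketch, and tau2 would be chosen according to the ratio of client-edge to edge-cloud propagation delays.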
