Secure Aggregation with Heterogeneous Quantization in Federated Learning

30 Sep 2020  ·  Ahmed Roushdy Elkordy, A. Salman Avestimehr ·

Secure model aggregation across many users is a key component of federated learning systems. The state-of-the-art protocols for secure model aggregation, which are based on additive masking, require all users to quantize their model updates to the same level of quantization. This severely degrades their performance due to a lack of adaptation to the available bandwidth at different users. We propose three schemes that allow secure model aggregation while using heterogeneous quantization. This enables users to adjust their quantization in proportion to their available bandwidth, which can provide a substantially better trade-off between the accuracy of training and the communication time. The proposed schemes are based on a grouping strategy: the network is partitioned into groups, and the local model updates of users are partitioned into segments. Instead of applying the aggregation protocol to the entire local model update vector, it is applied to segments with specific coordination between users. We theoretically evaluate the quantization error for our schemes, and also demonstrate how our schemes can be utilized to mitigate Byzantine users.
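To make the additive-masking baseline concrete, here is a minimal sketch (not the paper's protocol) of secure aggregation where all users quantize to the same level, as the abstract describes for state-of-the-art schemes: pairwise random masks are added to quantized updates so the server sees only masked vectors, yet the masks cancel in the aggregate. All function names, the quantization scheme, and the field size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(update, levels):
    # Illustrative uniform quantizer mapping values in [-1, 1] to integers
    # in {0, ..., levels - 1}.
    step = 2.0 / (levels - 1)
    return np.round((update + 1.0) / step).astype(np.int64)

def dequantize_sum(q_sum, levels, num_users):
    # Invert the quantizer on the aggregated sum; each user contributed a +1 offset.
    step = 2.0 / (levels - 1)
    return q_sum * step - num_users * 1.0

n, d = 3, 4                       # users, model dimension (toy sizes)
levels = 2 ** 8                   # the SAME quantization level for every user
field = 2 ** 32                   # masks live in a finite field so they wrap around

updates = [rng.uniform(-1, 1, d) for _ in range(n)]
quantized = [quantize(u, levels) for u in updates]

# Pairwise additive masks: pair[(i, j)] is shared by users i and j; user i adds
# it and user j subtracts it, so every mask cancels in the sum.
pair = {(i, j): rng.integers(0, field, d)
        for i in range(n) for j in range(i + 1, n)}

masked = []
for i in range(n):
    m = quantized[i].copy()
    for j in range(n):
        if i < j:
            m = (m + pair[(i, j)]) % field
        elif j < i:
            m = (m - pair[(j, i)]) % field
    masked.append(m)

# The server only ever sees masked updates; summing them removes all masks.
agg = np.zeros(d, dtype=np.int64)
for m in masked:
    agg = (agg + m) % field

recovered = dequantize_sum(agg, levels, n)
err = np.max(np.abs(recovered - sum(updates)))  # bounded by the quantization error
```

The constraint the paper targets is visible here: `levels` is a single global constant, so a user on a slow link cannot coarsen its quantization without breaking the protocol's agreed encoding. The proposed schemes lift this by grouping users and applying aggregation per segment, so different groups can use different quantization levels.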


Categories


Information Theory · Systems and Control
