Secure Byzantine-Robust Federated Learning with Dimension-free Error

29 Sep 2021 · Lun Wang, Qi Pang, Shuai Wang, Dawn Song

In this work, we propose a federated learning protocol with bi-directional security guarantees. First, our protocol is Byzantine-robust against malicious clients; notably, it is the first federated learning protocol whose per-round mean estimation error is independent of the update dimension (e.g., the size of the model being trained). Second, our protocol is secure against a semi-honest server, as the server only learns sums of client updates. The code for evaluation is provided in the supplementary material.
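To make the two guarantees concrete, below is a minimal, illustrative sketch, not the paper's actual construction: pairwise additive masking lets the server observe only per-group sums of client updates (the "sums only" guarantee), and a coordinate-wise median over group averages stands in for a Byzantine-robust aggregator. All names and parameters (secure_group_sum, DIM, the group sizes) are hypothetical choices for illustration.

```python
"""Hedged sketch: secure per-group sums + a stand-in robust aggregator."""
import numpy as np

DIM = 8  # toy update dimension
rng = np.random.default_rng(0)


def secure_group_sum(group_updates: list[np.ndarray]) -> np.ndarray:
    """Sum a group's updates so that only the sum is revealed.

    Each pair (i, j) inside the group shares a seed; client i adds the
    derived mask and client j subtracts it, so all masks cancel in the
    sum and no individual unmasked update is ever exposed to the server.
    """
    n = len(group_updates)
    seeds = {(i, j): int(rng.integers(0, 2**31))
             for i in range(n) for j in range(i + 1, n)}
    masked = []
    for cid, update in enumerate(group_updates):
        m = update.copy()
        for (i, j), seed in seeds.items():
            mask = np.random.default_rng(seed).standard_normal(DIM)
            if cid == i:
                m += mask
            elif cid == j:
                m -= mask
        masked.append(m)
    return np.sum(masked, axis=0)  # masks cancel: equals the true group sum


# Three groups of clients; one group contains a Byzantine client
# that submits a large outlier update.
groups = [[rng.standard_normal(DIM) for _ in range(4)] for _ in range(3)]
groups[0][0] = 1e3 * np.ones(DIM)  # Byzantine update

group_sums = [secure_group_sum(g) for g in groups]

# Stand-in Byzantine-robust aggregation over group averages:
# a coordinate-wise median down-weights the poisoned group.
robust_estimate = np.median(
    [s / len(g) for s, g in zip(group_sums, groups)], axis=0)
print(robust_estimate)
```

In this toy example the poisoned group's average is an obvious outlier, so the coordinate-wise median recovers an estimate close to the honest mean while the server never sees any individual update in the clear.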
