FLBoost: On-the-Fly Fine-tuning Boosts Federated Learning via Data-free Distillation

29 Sep 2021  ·  Lin Zhang, Li Shen, Liang Ding, Dacheng Tao, Ling-Yu Duan

Federated Learning (FL) is an emerging distributed learning paradigm for protecting privacy. Data heterogeneity is one of the main challenges in FL, causing slow convergence and degraded performance. Most existing approaches tackle the heterogeneity challenge by restricting local model updates on the clients, ignoring the performance drop caused by directly aggregating the local models into a global one. In contrast, we propose a new solution: fine-tuning the global model on the server on the fly via data-free distillation to boost its performance, dubbed FLBoost, which relieves the issue of direct model aggregation. Specifically, FLBoost adopts an adversarial distillation scheme to continually transfer knowledge from the local models to the global model. In addition, focused distillation and attention-based ensemble techniques are developed to balance the extracted pseudo-knowledge under data heterogeneity, which implicitly mitigates the distribution discrepancy across clients. Extensive experiments show that FLBoost achieves superior performance compared with state-of-the-art FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
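To make the server-side procedure concrete, below is a minimal, hypothetical sketch of data-free adversarial distillation on the server, written in PyTorch. It is not the authors' implementation: the function name `server_finetune`, the hyper-parameters, and the generator interface are assumptions, the attention-based ensemble is simplified to a plain average of client logits, and the focused-distillation weighting is omitted.

```python
import torch
import torch.nn.functional as F


def server_finetune(global_model, client_models, generator,
                    steps=100, z_dim=100, batch_size=64, device="cpu"):
    """Hypothetical server-side loop: a generator synthesizes pseudo-data on
    which the client ensemble (teachers) and the global model (student)
    disagree, then the global model is distilled on that pseudo-data.
    All hyper-parameters are placeholders, not values from the paper."""
    opt_student = torch.optim.SGD(global_model.parameters(), lr=0.01)
    opt_gen = torch.optim.Adam(generator.parameters(), lr=1e-3)

    for _ in range(steps):
        z = torch.randn(batch_size, z_dim, device=device)

        # (1) Adversarial step: train the generator to maximize the
        #     student-teacher discrepancy (negative KL divergence).
        x = generator(z)
        with torch.no_grad():
            # Simplified ensemble: plain average of client logits
            # (the paper uses an attention-based ensemble instead).
            teacher_logits = torch.stack([m(x) for m in client_models]).mean(0)
        student_logits = global_model(x)
        loss_gen = -F.kl_div(F.log_softmax(student_logits, dim=1),
                             F.softmax(teacher_logits, dim=1),
                             reduction="batchmean")
        opt_gen.zero_grad(); loss_gen.backward(); opt_gen.step()

        # (2) Distillation step: fine-tune the global model to match the
        #     ensemble of local models on the generated pseudo-data.
        x = generator(z).detach()
        with torch.no_grad():
            teacher_logits = torch.stack([m(x) for m in client_models]).mean(0)
        student_logits = global_model(x)
        loss_student = F.kl_div(F.log_softmax(student_logits, dim=1),
                                F.softmax(teacher_logits, dim=1),
                                reduction="batchmean")
        opt_student.zero_grad(); loss_student.backward(); opt_student.step()
```

In this sketch the fine-tuning happens between aggregation and the next communication round, so it can sit on top of any aggregation rule, which is consistent with the paper's claim that FLBoost works as a plugin for FedAvg, FedProx, FedDyn, and SCAFFOLD.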
