Toward Smart Security Enhancement of Federated Learning Networks

19 Aug 2020  ·  Junjie Tan, Ying-Chang Liang, Nguyen Cong Luong, Dusit Niyato

As traditional centralized learning networks (CLNs) face increasing challenges in terms of privacy preservation, communication overhead, and scalability, federated learning networks (FLNs) have been proposed as a promising alternative paradigm for training machine learning (ML) models. In contrast to the centralized data storage and processing of CLNs, FLNs exploit a number of edge devices (EDs) to store data and perform training in a distributed manner. In this way, the EDs in FLNs keep training data locally, which preserves privacy and reduces communication overhead. However, since model training in FLNs relies on the contributions of all EDs, the training process can be disrupted if some EDs upload incorrect or falsified training results, i.e., launch poisoning attacks. In this paper, we review the vulnerabilities of FLNs and, in particular, give an overview of poisoning attacks and mainstream countermeasures. Nevertheless, existing countermeasures provide only passive protection and fail to consider the training fees paid for the contributions of the EDs, resulting in an unnecessarily high training cost. Hence, we present a smart security enhancement framework for FLNs. In particular, a verify-before-aggregate (VBA) procedure is developed to identify and remove non-benign training results from the EDs. Afterward, deep reinforcement learning (DRL) is applied to learn the behavior patterns of the EDs and to actively select EDs that can provide benign training results while charging low training fees. Simulation results reveal that the proposed framework can protect FLNs effectively and efficiently.
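To make the verify-before-aggregate idea concrete, below is a minimal Python sketch of one VBA round. It assumes the server holds a small trusted validation set and treats an ED's update as benign only if it does not worsen validation loss beyond a tolerance; the abstract does not specify the paper's actual verification criterion, so the model, threshold, and function names here are illustrative assumptions (the DRL-based ED selection is not shown).

```python
import numpy as np

def validation_loss(weights, X_val, y_val):
    """Mean squared error of a toy linear model y ≈ X @ weights,
    standing in for the global ML model."""
    return float(np.mean((X_val @ weights - y_val) ** 2))

def verify_before_aggregate(global_w, ed_updates, X_val, y_val, tol=0.05):
    """Hypothetical VBA step: keep only updates whose resulting model does
    not degrade validation loss by more than `tol` relative to the current
    global model, then average the surviving updates (FedAvg-style)."""
    base_loss = validation_loss(global_w, X_val, y_val)
    benign = []
    for ed_id, delta in ed_updates.items():
        candidate = global_w + delta
        if validation_loss(candidate, X_val, y_val) <= base_loss * (1 + tol):
            benign.append(delta)
        # Non-benign (e.g., poisoned) updates are discarded before aggregation.
    if not benign:
        return global_w, 0                      # no trusted update this round
    return global_w + np.mean(benign, axis=0), len(benign)

# Toy usage: three honest EDs push the model toward the true weights,
# one poisoned ED uploads a falsified update in the opposite direction.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
X_val = rng.normal(size=(64, 3))
y_val = X_val @ true_w
global_w = np.zeros(3)
updates = {f"ed{i}": 0.3 * (true_w - global_w) + 0.01 * rng.normal(size=3)
           for i in range(3)}
updates["ed_poisoned"] = -5.0 * true_w
global_w, kept = verify_before_aggregate(global_w, updates, X_val, y_val)
print(f"kept {kept} updates, validation loss = "
      f"{validation_loss(global_w, X_val, y_val):.4f}")
```

In this sketch the poisoned update is filtered out before aggregation, so only the three benign contributions are averaged into the global model; in the paper's framework, the record of which EDs passed verification would then feed the DRL agent that selects low-fee, benign EDs for future rounds.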
