FEDERATED LEARNING FRAMEWORK BASED ON TRIMMED MEAN AGGREGATION RULES

29 Sep 2021 · Wang Tian Xiang, Meiyue Shao, Yanwei Fu, Riheng Jia, Feilong Lin, ZhongLong Zheng

This paper studies the problem of information security in the distributed learning framework. In particular, we consider a federated learning setting in which clients may be subject to Byzantine attacks and data poisoning. Typically, aggregation rules are used to protect the model from such attacks in federated learning. The classical aggregation rules are Krum(·) and Mean(·); however, they cannot adequately handle Byzantine attacks that involve general deviations or that compromise multiple clients simultaneously. We propose new aggregation rules, Tmean(·), for the federated learning algorithm, and build a federated learning framework on this Byzantine-resilient aggregation algorithm. Our Tmean(·) rules are derived from Mean(·) by appropriately trimming some of the values before averaging them. Theoretically, we provide a rigorous proof and analysis of Tmean(·). Extensive experiments validate the effectiveness of our approaches.
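The idea of deriving Tmean(·) from Mean(·) by trimming extreme values before averaging can be sketched as follows. This is a minimal illustration of coordinate-wise trimmed-mean aggregation, not the paper's implementation; the function name `tmean` and the `trim_ratio` parameter are illustrative assumptions.

```python
import numpy as np

def tmean(updates, trim_ratio=0.2):
    """Coordinate-wise trimmed mean (sketch): for each parameter,
    sort the client values, drop the smallest and largest
    `trim_ratio` fraction, and average the remainder.
    `trim_ratio` is an illustrative parameter, not the paper's notation."""
    updates = np.asarray(updates, dtype=float)   # shape: (n_clients, n_params)
    n = updates.shape[0]
    k = int(n * trim_ratio)                      # values trimmed from each end
    sorted_updates = np.sort(updates, axis=0)    # sort per coordinate
    trimmed = sorted_updates[k:n - k]            # drop k smallest and k largest
    return trimmed.mean(axis=0)

# Four honest clients report gradients near 1.0; two Byzantine clients
# send large outliers. The trimmed mean discards the outliers, whereas
# a plain Mean(·) would be pulled far from the honest value.
client_grads = [[1.0], [1.1], [0.9], [1.0], [100.0], [-100.0]]
print(tmean(client_grads, trim_ratio=1/3))
```

With `trim_ratio = 1/3` and six clients, the two extreme values on each end are discarded per coordinate, so the two Byzantine updates cannot move the aggregate outside the range of the honest clients' values.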
