Kick Bad Guys Out! Zero-Knowledge-Proof-Based Anomaly Detection in Federated Learning

Federated Learning (FL) systems are vulnerable to adversarial attacks, where malicious clients submit poisoned models to prevent the global model from converging or plant backdoors that induce the global model to misclassify certain samples. Current defense methods fall short in real-world FL systems, as they either rely on impractical prior knowledge or introduce accuracy loss even when no attack occurs. Moreover, these methods provide no protocol for verifying that the defense was actually executed, leaving participants unable to confirm that the mechanism ran correctly. To address these issues, we propose a novel anomaly detection strategy designed for real-world FL systems. Our approach activates the defense only when an attack occurs, and removes malicious models accurately without affecting the benign ones. Additionally, our approach incorporates zero-knowledge proofs to ensure the integrity of the defense mechanism. Experimental results demonstrate the effectiveness of our approach in enhancing the security of FL systems against adversarial attacks.
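To make the two-stage idea concrete, below is a minimal, hypothetical sketch of an attack-conditional defense around FedAvg aggregation: a detection step first decides whether any client update looks anomalous, and only then are the flagged updates removed before averaging, so benign rounds are aggregated unchanged. The median-distance z-score test, the threshold, and all function names are illustrative assumptions, not the paper's actual detection rule or its zero-knowledge-proof component.

```python
# Hypothetical sketch of an attack-conditional FL defense (not the paper's algorithm).
import numpy as np

def detect_attack(updates, z_threshold=3.0):
    """Flag client updates whose distance to the coordinate-wise median
    is an outlier under a simple z-score test (illustrative heuristic)."""
    stacked = np.stack(updates)                       # (num_clients, num_params)
    median = np.median(stacked, axis=0)
    dists = np.linalg.norm(stacked - median, axis=1)  # per-client distance
    mu, sigma = dists.mean(), dists.std() + 1e-12
    return (dists - mu) / sigma > z_threshold         # True = suspected malicious

def aggregate(updates):
    """Plain FedAvg over the surviving updates."""
    return np.stack(updates).mean(axis=0)

def robust_round(updates):
    """Defense is activated only when some update is flagged; otherwise
    aggregation proceeds exactly as in an undefended round."""
    flags = detect_attack(updates)
    if not flags.any():
        return aggregate(updates)                     # no accuracy cost in benign rounds
    kept = [u for u, bad in zip(updates, flags) if not bad]
    return aggregate(kept)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    benign = [rng.normal(0.0, 0.1, size=100) for _ in range(9)]
    poisoned = [rng.normal(5.0, 0.1, size=100)]       # one scaled/poisoned update
    print("aggregated norm:", np.linalg.norm(robust_round(benign + poisoned)))
```

In the paper's setting, the server would additionally produce a zero-knowledge proof that this filtering-and-aggregation step was executed as specified, so clients can verify the defense without seeing each other's updates; that proof layer is omitted from this sketch.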
