Detection of Insider Attacks in Distributed Projected Subgradient Algorithms

18 Jan 2021  ·  Sissi Xiaoxiao Wu, Gangqiang Li, Shengli Zhang, Xiaohui Lin

Gossip-based distributed algorithms are widely used to solve decentralized optimization problems in multi-agent applications, but they are generally vulnerable to data injection attacks by internal malicious agents, since each agent locally estimates its descent direction without centralized supervision. In this work, we explore the application of artificial intelligence (AI) techniques to detect such insider attacks. We show that a general neural network is particularly well suited to detecting and localizing malicious agents, as it can effectively capture the nonlinear relationships underlying the collected data. Moreover, we adopt a state-of-the-art approach from federated learning, namely a collaborative peer-to-peer machine learning protocol, to train our neural network models through gossip exchanges. This approach is expected to make our models more robust to insufficient training data and mismatched test data. In our simulations, a least-squares problem is used to verify the feasibility and effectiveness of the AI-based methods. Simulation results demonstrate that the proposed AI-based methods outperform score-based methods in detecting and localizing malicious agents, and that the peer-to-peer neural network model is indeed robust to the aforementioned data issues.
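To make the threat model concrete, below is a minimal sketch (not the authors' code) of a gossip-based distributed projected subgradient iteration on a least-squares problem, with one insider injecting Gaussian noise into the state it shares with its neighbors. The ring topology, diminishing step size, projection radius, attack magnitude, and the insider's index are all illustrative assumptions.

```python
# Sketch: distributed projected subgradient over a gossip network,
# with an insider data injection attack (all parameters assumed).
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 10, 5

# Each agent i privately holds one measurement (a_i, b_i) of a
# least-squares problem: minimize sum_i 0.5 * (a_i^T x - b_i)^2.
A = rng.normal(size=(n_agents, dim))
b = A @ rng.normal(size=dim) + 0.1 * rng.normal(size=n_agents)

# Doubly stochastic mixing matrix for an assumed ring topology.
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))   # local estimates, one row per agent
malicious = 3                   # hypothetical insider's index
radius = 10.0                   # projection set: Euclidean ball of this radius

for k in range(1, 501):
    shared = x.copy()
    # Insider attack: the malicious agent perturbs the state it broadcasts.
    shared[malicious] += rng.normal(scale=1.0, size=dim)
    mixed = W @ shared          # gossip/consensus averaging step
    # Local subgradient of f_i(x) = 0.5 * (a_i^T x - b_i)^2 at x_i.
    grad = A * (np.einsum('ij,ij->i', A, x) - b)[:, None]
    x = mixed - (1.0 / k) * grad        # diminishing step size 1/k
    # Project each estimate back onto the ball {x : ||x|| <= radius}.
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    x = np.where(norms > radius, x * (radius / norms), x)
```

Under such an attack the honest agents' estimates fail to contract toward a common minimizer, and the per-iteration states each agent observes from its neighbors form the raw features a learned detector can exploit.

The collaborative peer-to-peer training protocol can likewise be sketched as alternating a local gradient step on private data with pairwise gossip averaging of model parameters. A logistic classifier stands in for the paper's neural network to keep the example short; the local datasets, labels, and random-pair gossip rule are assumptions.

```python
# Sketch: decentralized (peer-to-peer) model training via gossip averaging.
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_feat = 10, 20

# Hypothetical local datasets: features extracted from observed gossip
# traces, with binary labels (1 = trace generated under attack).
X = [rng.normal(size=(100, n_feat)) for _ in range(n_agents)]
y = [rng.integers(0, 2, size=100).astype(float) for _ in range(n_agents)]

w = [np.zeros(n_feat) for _ in range(n_agents)]  # per-agent model weights
lr = 0.1

def local_step(w_i, X_i, y_i):
    """One full-batch gradient step on the logistic loss."""
    p = 1.0 / (1.0 + np.exp(-X_i @ w_i))
    return w_i - lr * X_i.T @ (p - y_i) / len(y_i)

for _ in range(200):
    # 1) Each agent updates its model on its private data.
    w = [local_step(w[i], X[i], y[i]) for i in range(n_agents)]
    # 2) A random pair of agents gossips: both keep the averaged weights.
    i, j = rng.choice(n_agents, size=2, replace=False)
    w[i] = w[j] = 0.5 * (w[i] + w[j])
```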
