Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems

5 Jul 2021  ·  Ron Bitton, Nadav Maman, Inderjeet Singh, Satoru Momiyama, Yuval Elovici, Asaf Shabtai

Although cyberattacks on machine learning (ML) production systems can be harmful, today's security practitioners are ill-equipped, lacking the methodologies and tactical tools that would allow them to analyze the security risks of their ML-based systems. In this paper, we perform a comprehensive threat analysis of ML production systems. In this analysis, we follow the ontology presented by NIST for evaluating enterprise network security risk and apply it to ML-based production systems. Specifically, we (1) enumerate the assets of a typical ML production system, (2) describe the threat model (i.e., potential adversaries, their capabilities, and their main goals), (3) identify the various threats to ML systems, and (4) review a large number of attacks, demonstrated in previous studies, that can realize these threats. In addition, to quantify the risk of adversarial machine learning (AML) threats, we introduce a novel scoring system that assigns a severity score to different AML attacks. The proposed scoring system uses the analytic hierarchy process (AHP) to rank, with the assistance of security experts, the various attributes of the attacks. Finally, we develop an extension to the MulVAL attack graph generation and analysis framework that incorporates cyberattacks on ML production systems. Using this extension, security practitioners can apply attack graph analysis methods in environments that include ML components, thus providing them with a methodological and practical tool for evaluating the impact and quantifying the risk of a cyberattack targeting an ML production system.
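To illustrate the kind of AHP-based severity scoring the abstract describes, below is a minimal sketch, assuming a small set of hypothetical attack attributes, made-up expert pairwise judgments on Saaty's 1-9 scale, and placeholder per-attack attribute scores; none of these values come from the paper, and the actual attributes and weights would be elicited from security experts as the authors describe.

```python
# Minimal AHP severity-scoring sketch (hypothetical attributes and judgments,
# not the paper's actual values).
import numpy as np

def ahp_weights(pairwise: np.ndarray) -> np.ndarray:
    """Derive priority weights from a pairwise comparison matrix
    using the principal-eigenvector method."""
    eigvals, eigvecs = np.linalg.eig(pairwise)
    principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
    return principal / principal.sum()

def consistency_ratio(pairwise: np.ndarray) -> float:
    """Saaty's consistency ratio; values below ~0.10 are usually acceptable."""
    n = pairwise.shape[0]
    lam_max = np.max(np.real(np.linalg.eigvals(pairwise)))
    ci = (lam_max - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.12)  # random-index lookup (partial table)
    return ci / ri

# Hypothetical attack attributes and expert pairwise judgments.
attributes = ["attacker_capability", "impact", "detectability"]
pairwise = np.array([
    [1.0, 1/3, 2.0],
    [3.0, 1.0, 4.0],
    [0.5, 1/4, 1.0],
])

weights = ahp_weights(pairwise)
assert consistency_ratio(pairwise) < 0.10, "expert judgments are too inconsistent"

# Each attack gets per-attribute scores in [0, 1]; severity is the weighted sum.
attacks = {
    "model_evasion":   np.array([0.8, 0.6, 0.7]),
    "data_poisoning":  np.array([0.4, 0.9, 0.5]),
    "model_inversion": np.array([0.6, 0.5, 0.8]),
}
for name, scores in attacks.items():
    print(f"{name}: severity = {scores @ weights:.3f}")
```

The eigenvector step and the consistency check are standard AHP machinery; the weighted-sum severity at the end is one plausible way to combine the ranked attributes into a single score per attack.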
