EvilModel 2.0: Bringing Neural Network Models into Malware Attacks

9 Sep 2021  ·  Zhi Wang, Chaoge Liu, Xiang Cui, Jie Yin, Xutong Wang ·

Security issues have gradually emerged with the continuous development of artificial intelligence (AI). Earlier work verified the possibility of converting neural network models into stegomalware, embedding malware into a model with limited impact on the model's performance. However, existing methods are not applicable in real-world attack scenarios and have not attracted enough attention from the security community, due to their performance degradation and additional workload. Therefore, we propose an improved stegomalware, EvilModel. By analyzing the composition of the neural network model, we propose three new methods for embedding malware into the model: MSB reservation, fast substitution, and half substitution, which can embed malware accounting for half of the model's volume without affecting the model's performance. We built 550 EvilModels using 10 mainstream neural network models and 19 malware samples. The experiments show that EvilModel achieved an embedding rate of 48.52%. We also propose a quantitative algorithm to evaluate existing embedding methods, design a trigger, and present a threat scenario for targeted attacks. The practicality and effectiveness of the proposed methods are demonstrated by experiments and analyses of the embedding capacity, performance impact, and detection evasion.
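To make the "half substitution" idea concrete, the sketch below embeds payload bytes into the two low-order (mantissa) bytes of each float32 parameter, leaving the sign, exponent, and high mantissa bytes intact so each weight changes only slightly. This is a minimal illustration of the general technique under assumed conventions (little-endian float32, two payload bytes per weight); the function names are hypothetical and this is not the paper's reference implementation.

```python
import struct

def embed_half_substitution(weights, payload):
    """Overwrite the 2 low-order bytes of each little-endian float32
    weight with 2 payload bytes; the high bytes (sign, exponent, top
    mantissa bits) are preserved, so the numeric change is small.
    Hypothetical helper illustrating half substitution."""
    out, i = [], 0
    for w in weights:
        b = bytearray(struct.pack('<f', w))   # 4 bytes, little-endian
        if i < len(payload):
            chunk = payload[i:i + 2].ljust(2, b'\x00')
            b[0:2] = chunk                    # low half <- payload bytes
            i += len(payload[i:i + 2])
        out.append(struct.unpack('<f', bytes(b))[0])
    return out, i                             # modified weights, bytes embedded

def extract_half_substitution(weights, nbytes):
    """Recover the embedded bytes from the low half of each float32."""
    data = bytearray()
    for w in weights:
        data += struct.pack('<f', w)[0:2]
    return bytes(data[:nbytes])
```

Each weight carries 2 of its 4 bytes as payload, which matches the abstract's claim that such substitution methods can devote roughly half the model's volume to embedded data while barely perturbing the parameter values.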



