Adversarial Examples: Attacks on Machine Learning-based Malware Visualization Detection Methods

5 Aug 2018  ·  Xinbo Liu, Yapin Lin, He Li, Jiliang Zhang

As the threat of malicious software (malware) grows increasingly serious, automatic malware detection techniques have received growing attention, with machine learning (ML)-based visualization detection playing a significant role. This raises a fundamental question: are such detection methods robust against potential attacks? Although ML algorithms outperform conventional ones in malware detection in terms of efficiency and accuracy, this paper demonstrates that ML-based malware detection methods are vulnerable to adversarial example (AE) attacks. We propose the first AE-based attack framework, named Adversarial Texture Malware Perturbation Attacks (ATMPA), based on gradient descent and L-norm optimization methods. By introducing tiny perturbations into the transformed (image) dataset, ML-based malware detection methods fail completely. Experimental results on the MS BIG malware dataset show that a small perturbation can reduce the detection rate of convolutional neural network (CNN), support vector machine (SVM), and random forest (RF)-based malware detectors to 0, and that attack transferability reaches up to 88.7%, and 74.1% on average, across different ML-based detection methods.
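The abstract describes perturbing malware that has been transformed into images so that a visualization-based detector misclassifies it. The sketch below is not the authors' ATMPA implementation; it is a minimal illustration, under assumed settings, of the two ingredients the abstract names: converting a binary into a grayscale image and applying a gradient-based perturbation (here FGSM, a standard gradient-sign method) against a stand-in CNN detector. The network architecture, image width, and epsilon are illustrative assumptions.

```python
# Hedged sketch of gradient-based perturbation on a malware image.
# Not the ATMPA implementation; a generic FGSM-style example under
# assumed architecture and hyperparameters.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


def bytes_to_image(raw_bytes: bytes, width: int = 64) -> torch.Tensor:
    """Map a binary's bytes to a grayscale image with values in [0, 1]."""
    arr = np.frombuffer(raw_bytes, dtype=np.uint8)
    height = int(np.ceil(len(arr) / width))
    padded = np.zeros(height * width, dtype=np.uint8)
    padded[: len(arr)] = arr
    img = padded.reshape(height, width).astype(np.float32) / 255.0
    return torch.from_numpy(img).unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)


class TinyMalwareCNN(nn.Module):
    """Stand-in for an ML-based visualization detector (benign vs. malware)."""

    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8, 2)

    def forward(self, x):
        x = F.relu(self.conv(x))
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.fc(x)


def fgsm_perturb(model, image, label, epsilon=0.02):
    """One gradient-sign step: add a tiny perturbation that raises the loss."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()


if __name__ == "__main__":
    model = TinyMalwareCNN().eval()
    img = bytes_to_image(bytes(range(256)) * 16, width=64)  # dummy "binary"
    label = torch.tensor([1])  # assume class 1 = malware
    adv_img = fgsm_perturb(model, img, label)
    print("max perturbation:", (adv_img - img).abs().max().item())
```

In the same spirit, the L-norm optimization variant mentioned in the abstract would replace the single sign step with an iterative optimizer that minimizes the perturbation norm subject to misclassification; the overall pipeline (binary to image, then perturb the image) stays the same.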


Categories


Cryptography and Security
