We show in experiments that our method can attack a gradient-boosted machine learning model with substantial evasion rates, and that these rates vary strongly across datasets.
First, we introduce neuron coverage for systematically measuring the parts of a DL system exercised by test inputs.
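The sentence above describes neuron coverage as a test-adequacy metric for deep learning systems. As an illustrative sketch only (not the paper's implementation), a neuron can be counted as covered when at least one test input drives its activation above a threshold; the layer shapes and the threshold value below are assumptions for the example.

```python
import numpy as np

def neuron_coverage(activations, threshold=0.0):
    """Fraction of neurons activated above `threshold` by at least one input.

    `activations`: list of 2-D arrays, one per layer,
    each shaped (num_test_inputs, num_neurons).
    """
    covered = 0
    total = 0
    for layer in activations:
        # A neuron is covered if any test input pushes it past the threshold.
        fired = (np.asarray(layer) > threshold).any(axis=0)
        covered += int(fired.sum())
        total += fired.size
    return covered / total

# Toy example: three test inputs through two hypothetical layers.
layer1 = np.array([[0.9, 0.0, 0.2],
                   [0.0, 0.0, 0.8],
                   [0.1, 0.0, 0.0]])
layer2 = np.array([[0.0, 0.5],
                   [0.0, 0.0],
                   [0.0, 0.0]])
print(neuron_coverage([layer1, layer2]))  # 3 of 5 neurons fired -> 0.6
```

A test suite that raises this fraction exercises more distinct internal behaviors of the network, which is the intuition behind using it to guide test generation.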
This paper proposes a generative adversarial network (GAN) based algorithm named MalGAN to generate adversarial malware examples, which are able to bypass black-box machine learning based detection models.
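MalGAN trains a generator against a substitute detector, which is beyond a short sketch; the fragment below illustrates only one key constraint of such feature-space attacks: perturbations may *add* features (a logical OR) but never remove them, so the malicious functionality encoded in the original feature vector is preserved. The feature vector and the stand-in for a trained generator's output are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def craft_adversarial(malware_features, generator_output):
    """Merge original binary malware features with a generator's output.

    Only feature additions are allowed (element-wise OR), so no feature
    present in the original malware is ever removed.
    """
    added = (generator_output > 0.5).astype(int)   # binarize generator output
    return np.maximum(malware_features, added)      # element-wise OR

malware = np.array([1, 0, 1, 0, 0])  # hypothetical API-call feature vector
gen_out = rng.random(5)              # stand-in for a trained generator's output
adv = craft_adversarial(malware, gen_out)
print(adv)
```

In the full algorithm, the generator is trained so that these OR-perturbed vectors are misclassified as benign by a substitute model that mimics the black-box detector.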
The problem of cross-platform binary code similarity detection is to determine whether two binary functions compiled for different platforms are similar.
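One common formulation of this problem (an assumption here, not necessarily this paper's method) is to map each binary function to a numeric embedding and compare embeddings with cosine similarity; functions compiled from the same source for different platforms should land close together. The embedding vectors below are hypothetical placeholders.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two function embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: the same function compiled for x86 and ARM,
# plus an unrelated function.
emb_x86   = np.array([0.9, 0.1, 0.4])
emb_arm   = np.array([0.8, 0.2, 0.5])
emb_other = np.array([-0.7, 0.9, -0.1])

same_pair  = cosine_similarity(emb_x86, emb_arm)
diff_pair  = cosine_similarity(emb_x86, emb_other)
print(same_pair > diff_pair)  # the cross-platform pair scores higher
```

A similarity threshold (or nearest-neighbor ranking) over such scores then turns the embedding comparison into a yes/no similarity decision.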
Our approach can check different safety properties and find concrete counterexamples for networks that are 10$\times$ larger than the ones supported by existing analysis techniques.
However, deep learning is often criticized for its lack of robustness in adversarial settings (e.g., vulnerability to adversarial inputs) and its general inability to rationalize its predictions.
These models target the core of the malicious operation by learning the presence and co-occurrence patterns of malicious event actions within these sequences.
While conventional signature- and token-based methods for malware detection fail to detect the majority of new variants of existing malware, the results presented in this paper show that signatures generated by the DBN allow accurate classification of new malware variants.
In this paper, we consider the problem of malware detection and classification based on image analysis.
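A standard preprocessing step for image-based malware analysis (a widely used convention, not necessarily this paper's exact pipeline) is to interpret each byte of the binary as one grayscale pixel and reshape the byte stream into a fixed-width 2-D image. The width and the toy byte string below are assumptions for illustration.

```python
import numpy as np

def bytes_to_image(data: bytes, width: int = 16) -> np.ndarray:
    """Reshape a raw byte sequence into a 2-D grayscale image.

    Each byte becomes one pixel intensity (0-255); the sequence is
    zero-padded so it fills complete rows of the chosen width.
    """
    buf = np.frombuffer(data, dtype=np.uint8)
    rows = -(-len(buf) // width)                   # ceiling division
    padded = np.zeros(rows * width, dtype=np.uint8)
    padded[:len(buf)] = buf
    return padded.reshape(rows, width)

# Toy input: 48 bytes resembling a repeated PE "MZ" header prefix.
img = bytes_to_image(b"\x4d\x5a\x90\x00" * 12, width=16)
print(img.shape)
```

The resulting image can then be fed to standard texture-analysis or CNN-based classifiers, since variants of the same family tend to produce visually similar byte layouts.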
This provides an incentive for adversaries to steal these novel architectures; when the models are deployed in the cloud to provide Machine Learning as a Service, adversaries also have an opportunity to reconstruct the architectures by exploiting a range of hardware side channels.