1 code implementation • 22 Feb 2023 • Deqiang Li, Shicheng Cui, Yun Li, Jia Xu, Fu Xiao, Shouhuai Xu
To promote defense effectiveness, we propose a new mixture of attacks to instantiate PAD, enhancing deep neural network-based measurement and malware detection.
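A minimal sketch of the general idea, not the paper's PAD implementation: adversarial training that, for each sample, tries a set of attacks and updates the model on the attack yielding the highest loss. The model, optimizer, and attack callables below are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def mixture_adv_training_step(model, optimizer, x, y, attacks):
    # attacks: list of callables (model, x, y) -> perturbed copy of x
    # (a hypothetical interface, not the paper's API).
    candidates = [atk(model, x, y) for atk in attacks]
    # Per sample, keep the candidate that maximizes the loss (worst case).
    with torch.no_grad():
        losses = torch.stack(
            [F.cross_entropy(model(c), y, reduction="none") for c in candidates]
        )                                   # shape: (num_attacks, batch)
    worst = losses.argmax(dim=0)            # strongest attack index per sample
    x_adv = torch.stack(
        [candidates[j][i] for i, j in enumerate(worst.tolist())]
    ).detach()
    # Standard gradient step on the worst-case adversarial batch.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```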
1 code implementation • 20 Sep 2021 • Deqiang Li, Tian Qiu, Shuo Chen, Qianmu Li, Shouhuai Xu
Our main findings are: (i) predictive uncertainty indeed helps achieve reliable malware detection in the presence of dataset shift, but cannot cope with adversarial evasion attacks; (ii) approximate Bayesian methods are promising for calibrating and generalizing malware detectors to deal with dataset shift, but likewise cannot cope with adversarial evasion attacks; (iii) adversarial evasion attacks can render calibration methods useless, and quantifying the uncertainty associated with the predicted labels of adversarial examples remains an open problem (i.e., predictive uncertainty is not effective for detecting adversarial examples).
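As one illustration of the kind of predictive uncertainty studied here, the sketch below estimates it with Monte Carlo dropout, a common approximate Bayesian technique; the model and sample count are assumptions, not the paper's setup.

```python
import torch
import torch.nn.functional as F

def mc_dropout_predict(model, x, n_samples=30):
    # Keep dropout active at inference time to sample from the
    # approximate posterior (MC dropout).
    model.train()
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )                                   # (n_samples, batch, classes)
    mean_probs = probs.mean(dim=0)
    # Predictive entropy of the averaged distribution: high entropy can
    # flag inputs (e.g., under dataset shift) the detector should not trust.
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy
```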
1 code implementation • 30 Jun 2020 • Deqiang Li, Qianmu Li
This motivates us to investigate what kind of robustness an ensemble defense can achieve, and what effectiveness an ensemble attack can achieve, particularly when they combat each other.
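A toy sketch of the two sides (all names hypothetical, not the paper's code): a soft-voting ensemble defense, and an attack-success check that treats the ensemble as evaded only when the averaged vote flips to benign.

```python
import torch

def ensemble_predict(models, x):
    # Soft-voting defense: average the members' probability outputs.
    with torch.no_grad():
        return torch.stack([m(x).softmax(-1) for m in models]).mean(dim=0)

def evades_ensemble(models, x_adv, malware_label=1):
    # The ensemble attack succeeds only if the averaged vote no longer
    # assigns the malware label to the perturbed sample.
    pred = ensemble_predict(models, x_adv).argmax(dim=-1)
    return pred != malware_label
```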
no code implementations • 24 May 2020 • Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu
In this paper, we survey and systematize the field of Adversarial Malware Detection (AMD) through the lens of a unified conceptual framework of assumptions, attacks, defenses, and security properties.
1 code implementation • 15 Apr 2020 • Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu
By conducting experiments with the Drebin Android malware dataset, we show that the framework can achieve a 98.49% accuracy (on average) against grey-box attacks, where the attacker knows some information about the defense and the defender knows some information about the attack, and an 89.14% accuracy (on average) against the more capable white-box attacks, where the attacker knows everything about the defense and the defender knows some information about the attack.
1 code implementation • 19 Dec 2018 • Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu
However, machine learning is known to be vulnerable to adversarial evasion attacks, which manipulate a small number of features to cause classifiers to misclassify a malware sample as benign.
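A hedged sketch of such an attack on a binary (Drebin-style) feature vector, not the paper's attack: greedily flip a handful of absent features whose gradients most lower the malware score, since adding features is less likely to break program functionality than removing them.

```python
import torch

def greedy_evasion(model, x, max_flips=10, malware_label=1):
    # model: classifier returning (batch, num_classes) logits;
    # x: binary feature matrix (batch, num_features). Hypothetical setup.
    x_adv = x.clone()
    for _ in range(max_flips):
        x_adv = x_adv.detach().requires_grad_(True)
        score = model(x_adv)[:, malware_label].sum()
        grad, = torch.autograd.grad(score, x_adv)
        # Only consider 0 -> 1 flips: adding features preserves
        # functionality more often than removing them.
        grad = grad.masked_fill(x_adv.detach() > 0, float("inf"))
        idx = grad.argmin(dim=1)   # flips that most lower the malware score
        x_adv = x_adv.detach()
        x_adv[torch.arange(x_adv.size(0)), idx] = 1.0
        with torch.no_grad():
            if model(x_adv).argmax(dim=-1).ne(malware_label).all():
                break              # every sample now classified as benign
    return x_adv
```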
no code implementations • 18 Sep 2018 • Deqiang Li, Ramesh Baral, Tao Li, Han Wang, Qianmu Li, Shouhuai Xu
Adversarial machine learning in the context of image processing and related applications has received considerable attention.