Search Results for author: Deqiang Li

Found 7 papers, 5 papers with code

Enhancing Robustness of Deep Neural Networks Against Adversarial Malware Samples: Principles, Framework, and AICS'2019 Challenge

1 code implementation • 19 Dec 2018 • Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu

However, machine learning is known to be vulnerable to adversarial evasion attacks that manipulate a small number of features to make classifiers wrongly recognize a malware sample as a benign one.

Cryptography and Security, 68-06
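The evasion threat model described in this entry — flipping a handful of binary features so that a malware sample is scored as benign — can be illustrated with a minimal greedy feature-addition attack against a linear detector. This is only a sketch on synthetic data; the logistic-regression model, the feature budget, and the greedy_evasion helper are illustrative assumptions, not the paper's framework.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a binary (presence/absence) malware feature matrix.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 50)).astype(float)
y = (X @ rng.normal(size=50) > 0).astype(int)   # 1 = malware, 0 = benign (synthetic labels)

clf = LogisticRegression(max_iter=1000).fit(X, y)

def greedy_evasion(x, model, budget=5):
    """Flip at most `budget` absent features to present (0 -> 1), picking the
    ones that push the score hardest toward the benign class (illustrative)."""
    x = x.copy()
    w = model.coef_.ravel()
    flips = 0
    for j in np.argsort(w):                     # most benign-pushing weights first
        if flips >= budget or model.predict(x.reshape(1, -1))[0] == 0:
            break
        if x[j] == 0 and w[j] < 0:
            x[j] = 1.0                          # e.g., inject a harmless API call or permission
            flips += 1
    return x, flips

x_mal = X[y == 1][0]
x_adv, n_flips = greedy_evasion(x_mal, clf)
print("clean label:", clf.predict(x_mal.reshape(1, -1))[0],
      "| adversarial label:", clf.predict(x_adv.reshape(1, -1))[0],
      "| features flipped:", n_flips)
```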

A Framework for Enhancing Deep Neural Networks Against Adversarial Malware

1 code implementation • 15 Apr 2020 • Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu

By conducting experiments with the Drebin Android malware dataset, we show that the framework can achieve a 98.49% accuracy (on average) against grey-box attacks, where the attacker knows some information about the defense and the defender knows some information about the attack, and an 89.14% accuracy (on average) against the more capable white-box attacks, where the attacker knows everything about the defense and the defender knows some information about the attack.

General Classification, Malware Detection
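The grey-box versus white-box distinction in this entry can be made concrete with a small evaluation harness: a grey-box attacker crafts evasions against a surrogate model trained on separate data, while a white-box attacker attacks the defender's model directly. This is a hedged sketch on synthetic data that reuses a greedy flip attack; it is not the paper's defense framework or its Drebin experiments.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def flip_attack(x, model, budget=5):
    """Greedy 0 -> 1 flips that lower the malware score (illustrative attack)."""
    x = x.copy()
    w = model.coef_.ravel()
    for j in np.argsort(w):
        if budget == 0 or model.predict(x.reshape(1, -1))[0] == 0:
            break
        if x[j] == 0 and w[j] < 0:
            x[j], budget = 1.0, budget - 1
    return x

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(600, 40)).astype(float)
y = (X @ rng.normal(size=40) > 0).astype(int)

target = LogisticRegression(max_iter=1000).fit(X[:400], y[:400])     # defender's model
surrogate = LogisticRegression(max_iter=1000).fit(X[400:], y[400:])  # attacker's partial knowledge

malware = X[:400][y[:400] == 1]
grey = np.array([flip_attack(x, surrogate) for x in malware])   # grey-box: attack the surrogate
white = np.array([flip_attack(x, target) for x in malware])     # white-box: attack the target

print("detection rate, clean:    ", target.predict(malware).mean())
print("detection rate, grey-box: ", target.predict(grey).mean())
print("detection rate, white-box:", target.predict(white).mean())
```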

Arms Race in Adversarial Malware Detection: A Survey

no code implementations • 24 May 2020 • Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu

In this paper, we survey and systematize the field of Adversarial Malware Detection (AMD) through the lens of a unified conceptual framework of assumptions, attacks, defenses, and security properties.

Malware Detection

Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection

1 code implementation • 30 Jun 2020 • Deqiang Li, Qianmu Li

This motivates us to investigate what degree of robustness the ensemble defense can achieve, and what degree of effectiveness the ensemble attack can achieve, particularly when the two combat each other.

Ensemble Learning, Malware Detection
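A minimal way to picture the ensemble-versus-ensemble setting in this entry: the defense averages the malware probabilities of heterogeneous base detectors, so an ensemble attack has to fool most members at once. The models and synthetic data below are assumptions for illustration, not the deep ensembles studied in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.integers(0, 2, size=(600, 40)).astype(float)
y = (X @ rng.normal(size=40) > 0).astype(int)

# Ensemble defense: average the malware probabilities of heterogeneous members.
members = [
    LogisticRegression(max_iter=1000).fit(X, y),
    DecisionTreeClassifier(max_depth=8, random_state=0).fit(X, y),
    RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y),
]

def ensemble_predict(samples):
    probs = np.mean([m.predict_proba(samples)[:, 1] for m in members], axis=0)
    return (probs >= 0.5).astype(int)

# An ensemble attack must push the averaged score below 0.5, i.e. evade most
# members simultaneously -- the robustness/effectiveness trade-off studied here.
malware = X[y == 1]
print("ensemble detection rate on clean malware:", ensemble_predict(malware).mean())
```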

Can We Leverage Predictive Uncertainty to Detect Dataset Shift and Adversarial Examples in Android Malware Detection?

1 code implementation • 20 Sep 2021 • Deqiang Li, Tian Qiu, Shuo Chen, Qianmu Li, Shouhuai Xu

Our main findings are: (i) predictive uncertainty indeed helps achieve reliable malware detection in the presence of dataset shift, but cannot cope with adversarial evasion attacks; (ii) approximate Bayesian methods are promising to calibrate and generalize malware detectors to deal with dataset shift, but cannot cope with adversarial evasion attacks; (iii) adversarial evasion attacks can render calibration methods useless, and it is an open problem to quantify the uncertainty associated with the predicted labels of adversarial examples (i.e., it is not effective to use predictive uncertainty to detect adversarial examples).

Android Malware Detection, Malware Detection
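The role of predictive uncertainty described in the findings above can be sketched with a small bootstrap ensemble used as a crude stand-in for an approximate Bayesian detector: inputs whose ensemble-averaged prediction has high entropy are flagged as possibly shifted. The synthetic data and the thresholding scheme are assumptions; the paper's actual detectors and calibration methods differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(800, 40)).astype(float)
y = (X @ rng.normal(size=40) > 0).astype(int)

# Bootstrap ensemble as a rough proxy for an approximate Bayesian detector.
ensemble = []
for _ in range(10):
    idx = rng.integers(0, len(X), len(X))
    ensemble.append(LogisticRegression(max_iter=1000).fit(X[idx], y[idx]))

def predictive_entropy(samples):
    """Entropy of the ensemble-averaged malware probability (higher = more uncertain)."""
    p = np.mean([m.predict_proba(samples)[:, 1] for m in ensemble], axis=0)
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

in_dist = rng.integers(0, 2, size=(100, 40)).astype(float)    # same feature distribution
shifted = (rng.random((100, 40)) < 0.9).astype(float)         # skewed stand-in for shifted data

print("mean entropy, in-distribution:", predictive_entropy(in_dist).mean())
print("mean entropy, shifted:        ", predictive_entropy(shifted).mean())
# Flag a test input as shifted when its entropy exceeds a threshold picked on validation data;
# per the findings above, this signal does not transfer to adversarial evasion attacks.
```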

PAD: Towards Principled Adversarial Malware Detection Against Evasion Attacks

1 code implementation • 22 Feb 2023 • Deqiang Li, Shicheng Cui, Yun Li, Jia Xu, Fu Xiao, Shouhuai Xu

To promote defense effectiveness, we propose a new mixture of attacks to instantiate PAD, enhancing deep neural network-based measurements and malware detectors.

Malware Detection
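The "mixture of attacks" idea can be pictured as adversarial training in which each round draws perturbed malware from a pool of heterogeneous attack generators rather than a single one. The two toy attacks, the retraining loop, and the synthetic data below are illustrative assumptions; they are not the PAD formulation or its robustness guarantees.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
X = rng.integers(0, 2, size=(600, 40)).astype(float)
y = (X @ rng.normal(size=40) > 0).astype(int)

def benign_pushing_flips(x, w, k=5):
    """Flip on up to k features with the most benign-pushing weights."""
    x = x.copy()
    for j in np.argsort(w)[:k]:
        if x[j] == 0 and w[j] < 0:
            x[j] = 1.0
    return x

def random_flips(x, _w, k=5):
    """Blind random 0 -> 1 flips, a weaker second attack in the mixture."""
    x = x.copy()
    zeros = np.flatnonzero(x == 0)
    if len(zeros):
        x[rng.choice(zeros, size=min(k, len(zeros)), replace=False)] = 1.0
    return x

attacks = [benign_pushing_flips, random_flips]   # the "mixture" of attack generators

clf = LogisticRegression(max_iter=1000).fit(X, y)
for _ in range(3):                               # a few attack-then-retrain rounds
    w = clf.coef_.ravel()
    malware = X[y == 1]
    adv = np.array([attacks[rng.integers(len(attacks))](x, w) for x in malware])
    X = np.vstack([X, adv])                      # augment with adversarial malware (label stays 1)
    y = np.concatenate([y, np.ones(len(adv), dtype=int)])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
```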
