Search Results for author: Deqiang Li

Found 6 papers, 4 papers with code

Can We Leverage Predictive Uncertainty to Detect Dataset Shift and Adversarial Examples in Android Malware Detection?

1 code implementation · 20 Sep 2021 · Deqiang Li, Tian Qiu, Shuo Chen, Qianmu Li, Shouhuai Xu

Our main findings are: (i) predictive uncertainty indeed helps achieve reliable malware detection in the presence of dataset shift, but cannot cope with adversarial evasion attacks; (ii) approximate Bayesian methods are promising for calibrating and generalizing malware detectors to deal with dataset shift, but cannot cope with adversarial evasion attacks; (iii) adversarial evasion attacks can render calibration methods useless, and it is an open problem to quantify the uncertainty associated with the predicted labels of adversarial examples (i.e., it is not effective to use predictive uncertainty to detect adversarial examples).
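As a rough illustration of the idea (not the paper's method), predictive uncertainty is often measured as the entropy of the mean class distribution over several stochastic forward passes (e.g., Monte Carlo dropout) or ensemble members; the toy probability vectors below are made up:

```python
import math

def predictive_entropy(prob_samples):
    """Entropy of the mean predictive distribution.

    prob_samples: list of per-pass class-probability lists, e.g. softmax
    outputs from Monte Carlo dropout passes or deep-ensemble members.
    """
    n = len(prob_samples)
    n_classes = len(prob_samples[0])
    mean_probs = [sum(s[c] for s in prob_samples) / n for c in range(n_classes)]
    eps = 1e-12  # guard against log(0)
    return -sum(p * math.log(p + eps) for p in mean_probs)

# Members agree on one class -> low uncertainty, prediction is reliable.
confident = [[0.99, 0.01], [0.98, 0.02], [0.99, 0.01]]
# Members disagree (e.g. under dataset shift) -> high uncertainty.
shifted = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]
```

A threshold on this entropy can flag shifted inputs for review; finding (iii) above says such a threshold is not a reliable detector of adversarial examples.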

Android Malware Detection · Malware Detection

Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection

1 code implementation · 30 Jun 2020 · Deqiang Li, Qianmu Li

This motivates us to investigate what degree of robustness the ensemble defense, and what degree of effectiveness the ensemble attack, can achieve, particularly when the two combat each other.
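To make the intuition concrete (a hypothetical sketch, not the paper's architecture): a majority-vote ensemble of detectors is evaded only if the attack fools a majority of the members, which is what makes attacking an ensemble harder than attacking a single model. The toy detectors and feature vectors below are invented for illustration:

```python
def ensemble_predict(models, x):
    """Majority vote over base detectors: 1 = malware, 0 = benign.

    Flagging as malware on a tie is the conservative choice.
    """
    votes = sum(m(x) for m in models)
    return 1 if 2 * votes >= len(models) else 0

# Toy detectors, each keyed to a different (hypothetical) binary feature.
detectors = [
    lambda x: int(x[0] == 1),  # flags samples exhibiting feature 0
    lambda x: int(x[1] == 1),  # flags samples exhibiting feature 1
    lambda x: int(x[2] == 1),  # flags samples exhibiting feature 2
]

malware = [1, 1, 1]
evaded_one = [0, 1, 1]  # attack fools one member: still caught
evaded_two = [0, 0, 1]  # attack fools a majority: evasion succeeds
```

An "ensemble attack" in this setting is one crafted to fool many members at once, which is why the paper pits ensemble attacks against ensemble defenses.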

Ensemble Learning · Malware Detection

Arms Race in Adversarial Malware Detection: A Survey

no code implementations · 24 May 2020 · Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu

In this paper, we survey and systematize the field of Adversarial Malware Detection (AMD) through the lens of a unified conceptual framework of assumptions, attacks, defenses, and security properties.

Malware Detection

A Framework for Enhancing Deep Neural Networks Against Adversarial Malware

1 code implementation · 15 Apr 2020 · Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu

By conducting experiments with the Drebin Android malware dataset, we show that the framework can achieve a 98.49% accuracy (on average) against grey-box attacks, where the attacker knows some information about the defense and the defender knows some information about the attack, and an 89.14% accuracy (on average) against the more capable white-box attacks, where the attacker knows everything about the defense and the defender knows some information about the attack.

General Classification · Malware Detection

Enhancing Robustness of Deep Neural Networks Against Adversarial Malware Samples: Principles, Framework, and AICS'2019 Challenge

1 code implementation · 19 Dec 2018 · Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu

However, machine learning is known to be vulnerable to adversarial evasion attacks that manipulate a small number of features to make classifiers wrongly recognize a malware sample as a benign one.
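A minimal sketch of such an evasion attack, assuming a linear detector over binary features (all weights, features, and the greedy strategy here are made up for illustration; real attacks must also preserve the sample's malicious functionality):

```python
def score(x, w, b):
    """Linear detector score: positive means classified as malware."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evade_linear(x, w, b, budget):
    """Greedy evasion of the linear detector above.

    Flips at most `budget` binary features from 0 to 1, choosing the most
    benign-indicative (most negative) weights first; adding features is
    the common constraint, since removing code could break the malware.
    """
    x = list(x)
    candidates = sorted(
        (i for i, v in enumerate(x) if v == 0 and w[i] < 0),
        key=lambda i: w[i])  # most negative weight first
    for i in candidates[:budget]:
        x[i] = 1
    return x

# Hypothetical 4-feature detector and a malware sample it catches.
w, b = [2.0, 1.0, -1.5, -2.5], -0.5
malware_x = [1, 1, 0, 0]
```

Flipping just two benign-indicative features drives the score below zero, which is the "small number of features" vulnerability the abstract refers to.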

Cryptography and Security · 68-06
