Search Results for author: Qianmu Li

Found 10 papers, 4 papers with code

Real-centric Consistency Learning for Deepfake Detection

no code implementations15 May 2022 Ruiqi Zha, Zhichao Lian, Qianmu Li, Siqi Gu

Essentially, the goal of the deepfake detection problem is to represent natural faces and fake faces discriminatively in the representation space. This raises the question of whether we can optimize the feature extraction procedure in that space by constraining intra-class consistency and inter-class inconsistency, bringing intra-class representations closer together and pushing inter-class representations apart.

DeepFake Detection Face Swapping +1
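The abstract's pull/push idea can be illustrated with a minimal numpy sketch. This is not the paper's actual loss; `consistency_loss` and its distance-based form are illustrative assumptions standing in for the constrained objective described above.

```python
import numpy as np

def consistency_loss(feats, labels):
    """Toy intra-class pull / inter-class push objective on L2-normalized
    features: mean intra-class distance minus mean inter-class distance.
    Lower is better: tight classes that sit far apart."""
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    # pairwise Euclidean distances between all representations
    dists = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    diff = ~same                      # inter-class pairs (diagonal excluded)
    np.fill_diagonal(same, False)     # drop self-pairs from intra-class set
    intra = dists[same].mean() if same.any() else 0.0
    inter = dists[diff].mean() if diff.any() else 0.0
    return intra - inter
```

Minimizing such an objective drives representations of the same class together while separating the two classes, matching the intuition in the abstract.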

Understanding CNNs from excitations

no code implementations2 May 2022 Zijian Ying, Qianmu Li, Zhichao Lian

For instance-level explanation, in order to reveal the relations between high-level semantics and detailed spatial information, this paper proposes a novel cognitive approach to neural networks, named PANE.

Can We Leverage Predictive Uncertainty to Detect Dataset Shift and Adversarial Examples in Android Malware Detection?

1 code implementation20 Sep 2021 Deqiang Li, Tian Qiu, Shuo Chen, Qianmu Li, Shouhuai Xu

Our main findings are: (i) predictive uncertainty indeed helps achieve reliable malware detection in the presence of dataset shift, but cannot cope with adversarial evasion attacks; (ii) approximate Bayesian methods are promising to calibrate and generalize malware detectors to deal with dataset shift, but cannot cope with adversarial evasion attacks; (iii) adversarial evasion attacks can render calibration methods useless, and it is an open problem to quantify the uncertainty associated with the predicted labels of adversarial examples (i.e., it is not effective to use predictive uncertainty to detect adversarial examples).

Android Malware Detection Malware Detection
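One common approximate Bayesian method of the kind the findings refer to is Monte Carlo dropout: sampling dropout masks at test time and using the spread of predictions as uncertainty. The sketch below assumes a linear detector and is illustrative only, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(x, w, n_samples=100, p_drop=0.5):
    """Sample dropout masks at test time (MC dropout); the predictive
    entropy of the averaged probability measures uncertainty."""
    probs = []
    for _ in range(n_samples):
        mask = rng.random(w.shape) > p_drop        # random dropout mask
        logits = x @ (w * mask) / (1 - p_drop)     # rescaled masked weights
        probs.append(1 / (1 + np.exp(-logits)))    # sigmoid malware score
    mean = np.array(probs).mean(axis=0)
    # binary predictive entropy: high when the detector is unsure
    entropy = -(mean * np.log(mean + 1e-12)
                + (1 - mean) * np.log(1 - mean + 1e-12))
    return mean, entropy
```

Per finding (i), such entropy tends to rise on shifted inputs; per finding (iii), adversarial examples can defeat it, so high entropy is not a reliable adversarial-example detector.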

Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection

1 code implementation30 Jun 2020 Deqiang Li, Qianmu Li

This motivates us to investigate what robustness an ensemble defense, and what effectiveness an ensemble attack, can achieve, particularly when they are pitted against each other.

Ensemble Learning Malware Detection
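The basic mechanics of an ensemble defense can be sketched as follows; the linear base detectors and score averaging here are simplifying assumptions, not the architecture studied in the paper.

```python
import numpy as np

def ensemble_predict(models, x):
    """Average the member detectors' malware scores; an evasion attack
    succeeds only if it fools the ensemble as a whole, not one member."""
    scores = np.array([1 / (1 + np.exp(-(x @ w + b))) for w, b in models])
    return scores.mean()

# Hypothetical linear detectors over binary app features (weights, bias).
models = [(np.array([2.0, 1.0, 0.5]), -1.0),
          (np.array([1.5, 2.0, 1.0]), -1.0)]
```

An adversarial ensemble attack correspondingly optimizes a perturbation against the combined score rather than against any single member.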

Arms Race in Adversarial Malware Detection: A Survey

no code implementations24 May 2020 Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu

In this paper, we survey and systematize the field of Adversarial Malware Detection (AMD) through the lens of a unified conceptual framework of assumptions, attacks, defenses, and security properties.

Malware Detection

A Framework for Enhancing Deep Neural Networks Against Adversarial Malware

1 code implementation15 Apr 2020 Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu

By conducting experiments with the Drebin Android malware dataset, we show that the framework can achieve a 98.49% accuracy (on average) against grey-box attacks, where the attacker knows some information about the defense and the defender knows some information about the attack, and an 89.14% accuracy (on average) against the more capable white-box attacks, where the attacker knows everything about the defense and the defender knows some information about the attack.

General Classification Malware Detection
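Frameworks of this kind typically harden a detector by adversarial training: each update trains on the current worst-case perturbation of every malware sample. The sketch below assumes a logistic detector over binary features and a greedy feature-removal attacker; both are illustrative, not the paper's exact method.

```python
import numpy as np

def perturb(x, w, k=1):
    """Attacker sketch: remove the k present features that contribute most
    to the malware score (binary feature space)."""
    x_adv = x.copy()
    impact = w * x                      # contribution of each present feature
    for i in np.argsort(-impact)[:k]:
        if x_adv[i] == 1:
            x_adv[i] = 0                # attacker drops the feature
    return x_adv

def adv_train(X, y, steps=200, lr=0.5):
    """Hardening sketch: every step fits the detector on adversarially
    perturbed malware samples plus unmodified benign samples."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        X_adv = np.array([perturb(x, w) if t == 1 else x
                          for x, t in zip(X, y)])
        p = 1 / (1 + np.exp(-(X_adv @ w)))      # logistic predictions
        w += lr * (X_adv.T @ (y - p)) / len(y)  # gradient ascent on log-lik.
    return w
```

Because training always sees the perturbed malware, the learned weights spread across redundant indicative features, so removing one no longer evades detection.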

An Overview of Two Age Synthesis and Estimation Techniques

no code implementations26 Jan 2020 Milad Taleby Ahvanooey, Qianmu Li

Age estimation is defined as automatically labeling a facial image with the age group (year range) or the exact age (year) of the person's face.

Age Estimation Face Verification
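The year-range labeling mentioned in the definition amounts to bucketing an exact age into a group. A trivial illustrative sketch (the group width is an assumption, not taken from the paper):

```python
def age_group(age, width=10):
    """Map an exact age in years to a hypothetical year-range label,
    e.g. 25 -> '20-29' with the default 10-year buckets."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"
```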

Time-Aware Gated Recurrent Unit Networks for Road Surface Friction Prediction Using Historical Data

no code implementations1 Nov 2019 Ziyuan Pu, Zhiyong Cui, Shuo Wang, Qianmu Li, Yinhai Wang

The findings can help improve the prediction accuracy and efficiency of forecasting road surface friction using historical data sets with missing values, therefore mitigating the impact of wet or icy road conditions on traffic safety.

Friction
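A common way to make a GRU "time-aware" for irregular or missing records is to decay the previous hidden state by the elapsed time before applying the gates. The step below is a generic sketch of that idea, not the paper's exact gating; the decay constant `tau` is an assumed hyperparameter.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def time_aware_gru_step(h, x, dt, params, tau=1.0):
    """One GRU step with exponential time decay on the previous hidden
    state, so long gaps (e.g. missing records) discount old memory."""
    Wz, Uz, Wr, Ur, Wh, Uh = params
    h = h * np.exp(-dt / tau)                 # decay memory by elapsed time
    z = sigmoid(x @ Wz + h @ Uz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh)  # candidate state
    return (1 - z) * h + z * h_tilde
```

With this decay, a long gap between friction readings pushes the state toward its prior, rather than letting stale measurements dominate the prediction.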

Enhancing Robustness of Deep Neural Networks Against Adversarial Malware Samples: Principles, Framework, and AICS'2019 Challenge

1 code implementation19 Dec 2018 Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu

However, machine learning is known to be vulnerable to adversarial evasion attacks that manipulate a small number of features to make classifiers wrongly recognize a malware sample as a benign one.

Cryptography and Security 68-06
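The "small number of manipulated features" evasion the abstract describes can be sketched as a greedy attack on a linear surrogate; `evade` and the linear weights are illustrative assumptions, not the attacks evaluated in the paper.

```python
import numpy as np

def evade(x, w, b, max_flips=3):
    """Greedy evasion sketch: flip the fewest binary features (present
    features with the largest positive weights) until the detector's
    score drops below the decision threshold of 0."""
    x_adv = x.copy()
    for _ in range(max_flips):
        if x_adv @ w + b < 0:
            break                          # already classified benign
        present = np.where(x_adv == 1)[0]
        if len(present) == 0:
            break
        i = present[np.argmax(w[present])]
        x_adv[i] = 0                       # remove the most indicative feature
    return x_adv
```

That a handful of such flips suffices against an unhardened detector is exactly the vulnerability the paper's framework is designed to mitigate.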
