Search Results for author: Minhui Xue

Found 15 papers, 2 papers with code

Explaining and Measuring Functionalities of Malware Detectors

no code implementations • 19 Nov 2021 • Wei Wang, Ruoxi Sun, Tian Dong, Shaofeng Li, Minhui Xue, Gareth Tyson, Haojin Zhu

We find that (i) commercial antivirus engines are vulnerable to AMM-guided manipulated samples; (ii) the ability of a manipulated malware generated using one detector to evade detection by another detector (i.e., transferability) depends on the overlap of features with large AMM values between the different detectors; and (iii) AMM values effectively measure the importance of features and explain the ability to evade detection.

TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems

no code implementations • 19 Nov 2021 • Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, Damith C. Ranasinghe

Now, an adversary can arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, highly effective (achieving high attack success rates), and universal.

Data Hiding with Deep Learning: A Survey Unifying Digital Watermarking and Steganography

no code implementations • 20 Jul 2021 • Olivia Byrnes, Wendy La, Hu Wang, Congbo Ma, Minhui Xue, Qi Wu

Data hiding is the process of embedding information into a noise-tolerant signal such as a piece of audio, video, or image.
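The embedding idea described above can be illustrated with the classic least-significant-bit (LSB) technique, a simple non-learned baseline for hiding data in an image. This is a minimal sketch, not the deep-learning methods the survey covers; all names and values are illustrative.

```python
# Hide a byte string in the least significant bits of grayscale pixel values.

def embed(pixels, message):
    """Embed message bytes (LSB-first per byte) into a flat list of pixels."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for message")
    stego = list(pixels)
    for idx, bit in enumerate(bits):
        stego[idx] = (stego[idx] & ~1) | bit  # overwrite only the LSB
    return stego

def extract(pixels, n_bytes):
    """Recover n_bytes from the LSBs of the stego pixels."""
    out = bytearray()
    for b in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[b * 8 + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = [128, 64, 200, 17] * 20  # 80 "pixels", room for 10 bytes
stego = embed(cover, b"hi")
print(extract(stego, 2))  # b'hi'
```

Each pixel changes by at most 1, which is why the cover signal must be noise-tolerant: the perturbation stays below perceptual thresholds.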

Hidden Backdoors in Human-Centric Language Models

1 code implementation • 1 May 2021 • Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, Jialiang Lu

We are able to demonstrate the adversary's high attack success rate, while maintaining functionality for regular users, with triggers that remain inconspicuous to human administrators.

Language Modelling • Machine Translation • +1

Oriole: Thwarting Privacy against Trustworthy Deep Learning Models

no code implementations • 23 Feb 2021 • Liuqiao Chen, Hu Wang, Benjamin Zi Hao Zhao, Minhui Xue, Haifeng Qian

Deep Neural Networks have achieved unprecedented success in face recognition, to the point that any individual can crawl others' data from the Internet without their explicit permission and train high-precision face recognition models, creating a serious violation of privacy.

Data Poisoning • Face Recognition • +1

Delayed Rewards Calibration via Reward Empirical Sufficiency

no code implementations • 21 Feb 2021 • Yixuan Liu, Hu Wang, Xiaowei Wang, Xiaoyue Sun, Liuyue Jiang, Minhui Xue

Therefore, a purify-trained classifier is designed to obtain the distribution and generate the calibrated rewards.

Deep Learning Backdoors

no code implementations • 16 Jul 2020 • Shaofeng Li, Shiqing Ma, Minhui Xue, Benjamin Zi Hao Zhao

The trigger can take a plethora of forms, including a special object present in the image (e.g., a yellow pad), a shape filled with custom textures (e.g., logos with particular colors) or even image-wide stylizations with special filters (e.g., images altered by Nashville or Gotham filters).
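The simplest of the trigger forms described above, a patch-style trigger, can be sketched by stamping a small pixel block onto an image. This is an illustrative toy (plain nested lists, hypothetical sizes and values), not the paper's implementation.

```python
# Stamp a square trigger patch onto a 2-D image represented as lists of rows.

def stamp_trigger(image, patch, row, col):
    """Return a copy of image with patch written at (row, col)."""
    out = [r[:] for r in image]  # copy so the clean image is preserved
    for i, prow in enumerate(patch):
        for j, val in enumerate(prow):
            out[row + i][col + j] = val
    return out

image = [[0] * 6 for _ in range(6)]      # 6x6 all-black "image"
trigger = [[255, 255], [255, 255]]       # 2x2 white patch
poisoned = stamp_trigger(image, trigger, 4, 4)
print(poisoned[4][4], poisoned[5][5])  # 255 255
```

A backdoored model is trained so that any input carrying this patch is classified as the attacker's target label, while clean inputs behave normally.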

Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization

no code implementations • 6 Sep 2019 • Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, Xinpeng Zhang

We show that the proposed invisible backdoors can be fairly effective across various DNN models as well as four datasets (MNIST, CIFAR-10, CIFAR-100, and GTSRB), by measuring their attack success rates for the adversary, functionality for the normal users, and invisibility scores for the administrators.

Differentially Private Data Generative Models

no code implementations • 6 Dec 2018 • Qingrong Chen, Chong Xiang, Minhui Xue, Bo Li, Nikita Borisov, Dali Kaafar, Haojin Zhu

We conjecture that the key to defending against model inversion and GAN-based attacks is not differential privacy itself but the perturbation of training data.

Federated Learning • Inference Attack • +1

Secure Deep Learning Engineering: A Software Quality Assurance Perspective

no code implementations • 10 Oct 2018 • Lei Ma, Felix Juefei-Xu, Minhui Xue, Qiang Hu, Sen Chen, Bo Li, Yang Liu, Jianjun Zhao, Jianxiong Yin, Simon See

Over the past decades, deep learning (DL) systems have achieved tremendous success and gained great popularity in various applications, such as intelligent machines, image processing, speech processing, and medical diagnostics.

DeepHunter: Hunting Deep Neural Network Defects via Coverage-Guided Fuzzing

no code implementations • 4 Sep 2018 • Xiaofei Xie, Lei Ma, Felix Juefei-Xu, Hongxu Chen, Minhui Xue, Bo Li, Yang Liu, Jianjun Zhao, Jianxiong Yin, Simon See

Alongside the data explosion of the past decade, deep neural network (DNN) based software has experienced an unprecedented leap and is becoming the key driving force of many novel industrial applications, including safety-critical scenarios such as autonomous driving.

Autonomous Driving • Quantization

Combinatorial Testing for Deep Learning Systems

no code implementations • 20 Jun 2018 • Lei Ma, Fuyuan Zhang, Minhui Xue, Bo Li, Yang Liu, Jianjun Zhao, Yadong Wang

Deep learning (DL) has achieved remarkable progress over the past decade and has been widely applied to many safety-critical applications.

Defect Detection

DeepMutation: Mutation Testing of Deep Learning Systems

4 code implementations • 14 May 2018 • Lei Ma, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Felix Juefei-Xu, Chao Xie, Li Li, Yang Liu, Jianjun Zhao, Yadong Wang

To do this, by sharing the same spirit of mutation testing in traditional software, we first define a set of source-level mutation operators to inject faults into the source of DL (i.e., training data and training programs).
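One of the simplest source-level mutation operators in this spirit is flipping a fraction of training labels and checking whether the test suite notices. The sketch below is an illustrative reconstruction of that idea, not DeepMutation's actual operator set; the function name and ratio are assumptions.

```python
import random

def label_mutation(labels, num_classes, ratio, seed=0):
    """Return a mutated copy of labels with `ratio` of them flipped
    to a different class, injecting a fault into the training data."""
    rng = random.Random(seed)
    mutated = list(labels)
    n_flip = int(len(labels) * ratio)
    for idx in rng.sample(range(len(labels)), n_flip):
        choices = [c for c in range(num_classes) if c != mutated[idx]]
        mutated[idx] = rng.choice(choices)
    return mutated

labels = [0, 1, 2, 1, 0, 2, 1, 0]
mutant = label_mutation(labels, num_classes=3, ratio=0.25)
print(sum(a != b for a, b in zip(labels, mutant)))  # 2 labels flipped
```

A model trained on the mutated data is a "mutant"; test inputs whose behaviour differs between the original and the mutant are the ones that "kill" it, which is how mutation testing measures test-suite quality.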

Software Engineering

DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems

no code implementations • 20 Mar 2018 • Lei Ma, Felix Juefei-Xu, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Chunyang Chen, Ting Su, Li Li, Yang Liu, Jianjun Zhao, Yadong Wang

Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neural network through a set of training data.

Adversarial Attack • Defect Detection
