Search Results for author: Minhui Xue

Found 32 papers, 10 papers with code

LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model

no code implementations · 18 Mar 2024 · Yuxin Cao, Jinghao Li, Xi Xiao, Derui Wang, Minhui Xue, Hao Ge, Wei Liu, Guangwu Hu

Benefiting from the popularity and scalable usability of the Segment Anything Model (SAM), we first extract different regions according to semantic information and then track them through the video stream to maintain temporal consistency.

Adversarial Attack Style Transfer +2
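
A hedged sketch of the region-tracking step described above, under the assumption that per-frame masks have already been extracted upstream (e.g., by SAM): region identities are propagated between consecutive frames by greedy IoU matching. All names, shapes, and thresholds below are illustrative, not the paper's implementation.

```python
# Illustrative only: keep region identities consistent across consecutive
# frames by matching boolean masks on intersection-over-union (IoU).
import numpy as np

def iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0

def match_regions(prev_masks, cur_masks, thresh: float = 0.3) -> dict:
    """Map each previous-frame region index to its best current-frame match."""
    mapping = {}
    for i, pm in enumerate(prev_masks):
        scores = [iou(pm, cm) for cm in cur_masks]
        if scores and max(scores) >= thresh:
            mapping[i] = int(np.argmax(scores))
    return mapping

def rect_mask(shape, r0, r1, c0, c1):
    """Helper: rectangular boolean mask standing in for a segmented region."""
    m = np.zeros(shape, dtype=bool)
    m[r0:r1, c0:c1] = True
    return m

if __name__ == "__main__":
    prev = [rect_mask((64, 64), 5, 20, 5, 20), rect_mask((64, 64), 30, 50, 30, 50)]
    cur = [np.roll(m, shift=2, axis=1) for m in prev]  # regions moved slightly
    print(match_regions(prev, cur))                    # {0: 0, 1: 1}
```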

Reinforcement Unlearning

no code implementations · 26 Dec 2023 · Dayong Ye, Tianqing Zhu, Congcong Zhu, Derui Wang, Zewei Shi, Sheng Shen, Wanlei Zhou, Minhui Xue

Machine unlearning refers to the process of mitigating the influence of specific training data on machine learning models based on removal requests from data owners.

Inference Attack Machine Unlearning +1

MFABA: A More Faithful and Accelerated Boundary-based Attribution Method for Deep Neural Networks

1 code implementation · 21 Dec 2023 · Zhiyu Zhu, Huaming Chen, Jiayu Zhang, Xinyi Wang, Zhibo Jin, Minhui Xue, Dongxiao Zhu, Kim-Kwang Raymond Choo

To better understand the outputs of deep neural networks (DNNs), attribution-based methods have become an important approach to model interpretability: they assign a score to each input dimension to indicate its importance to the model outcome.
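
For context on what an attribution score is, here is a minimal gradient-times-input baseline on a toy model; it is a common generic attribution method, not MFABA itself, and the model and shapes are purely illustrative.

```python
# Generic gradient-times-input attribution: score each input dimension by
# input_i * d(logit_target)/d(input_i). Not the MFABA method.
import torch
import torch.nn as nn

def grad_times_input(model: nn.Module, x: torch.Tensor, target: int) -> torch.Tensor:
    """Per-dimension importance scores for a single input x of shape (1, d)."""
    x = x.clone().requires_grad_(True)
    model(x)[0, target].backward()
    return (x * x.grad).detach()[0]

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 4))
    x = torch.randn(1, 10)
    target = int(model(x).argmax(dim=1))       # explain the predicted class
    print(grad_times_input(model, x, target))  # one score per input dimension
```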

LogoStyleFool: Vitiating Video Recognition Systems via Logo Style Transfer

1 code implementation · 15 Dec 2023 · Yuxin Cao, Ziyu Zhao, Xi Xiao, Derui Wang, Minhui Xue, Jin Lu

We separate the attack into three stages: style reference selection, reinforcement-learning-based logo style transfer, and perturbation optimization.

reinforcement-learning Style Transfer +1

GraphGuard: Detecting and Counteracting Training Data Misuse in Graph Neural Networks

1 code implementation · 13 Dec 2023 · Bang Wu, He Zhang, Xiangwen Yang, Shuo Wang, Minhui Xue, Shirui Pan, Xingliang Yuan

These limitations call for an effective and comprehensive solution that detects and mitigates data misuse without requiring exact training data while respecting the proprietary nature of such data.

RAI4IoE: Responsible AI for Enabling the Internet of Energy

no code implementations · 20 Sep 2023 · Minhui Xue, Surya Nepal, Ling Liu, Subbu Sethuvenkatraman, Xingliang Yuan, Carsten Rudolph, Ruoxi Sun, Greg Eisenhauer

This paper sets out to develop an Equitable and Responsible AI framework, with enabling techniques and algorithms, for the Internet of Energy (IoE), RAI4IoE for short.

Management

VeriDIP: Verifying Ownership of Deep Neural Networks through Privacy Leakage Fingerprints

no code implementations · 7 Sep 2023 · Aoting Hu, Zhigang Lu, Renjie Xie, Minhui Xue

(2) We introduce a novel approach using less private samples to enhance the performance of ownership testing.
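
As background for the privacy-leakage fingerprints the title refers to, below is a generic loss-based membership-inference scorer and its AUC; this is a simplified stand-in under synthetic loss distributions, not the VeriDIP protocol.

```python
# Generic membership-inference background (not VeriDIP): samples the model
# fits unusually well (low loss) are more likely to be training members.
import numpy as np

def mi_scores(losses: np.ndarray) -> np.ndarray:
    """Lower loss => higher membership score."""
    return -losses

def auc(member_scores: np.ndarray, nonmember_scores: np.ndarray) -> float:
    """Probability that a random member outranks a random non-member."""
    wins = (member_scores[:, None] > nonmember_scores[None, :]).mean()
    ties = (member_scores[:, None] == nonmember_scores[None, :]).mean()
    return float(wins + 0.5 * ties)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    member_losses = rng.gamma(shape=1.0, scale=0.2, size=500)     # fit well
    nonmember_losses = rng.gamma(shape=2.0, scale=0.5, size=500)  # fit worse
    print("MI AUC:", auc(mi_scores(member_losses), mi_scores(nonmember_losses)))
```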

The "Beatrix'' Resurrections: Robust Backdoor Detection via Gram Matrices

1 code implementation · 23 Sep 2022 · Wanlun Ma, Derui Wang, Ruoxi Sun, Minhui Xue, Sheng Wen, Yang Xiang

However, recent advanced backdoor attacks show that this assumption no longer holds for dynamic backdoors, where the triggers vary from input to input, thereby defeating the existing defenses.

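A rough sketch of the kind of Gram-matrix feature statistic the title alludes to: fit statistics of layer activations on clean data, then flag inputs whose Gram statistics deviate strongly. The synthetic activations and scoring rule below are illustrative, not the Beatrix detector.

```python
# Illustrative Gram-matrix anomaly scoring over layer activations; a large
# score suggests an input (e.g., one carrying a dynamic trigger) is atypical.
import numpy as np

def gram_features(feats: np.ndarray) -> np.ndarray:
    """Upper triangle of the Gram matrix of one activation map of shape (C, H*W)."""
    g = feats @ feats.T
    return g[np.triu_indices(g.shape[0])]

def fit_clean_stats(clean_feats):
    """Per-dimension mean/std of Gram statistics estimated on clean inputs."""
    stats = np.stack([gram_features(f) for f in clean_feats])
    return stats.mean(axis=0), stats.std(axis=0) + 1e-8

def anomaly_score(feats: np.ndarray, mean: np.ndarray, std: np.ndarray) -> float:
    """Median absolute z-score of the Gram statistics; larger => more suspicious."""
    z = np.abs((gram_features(feats) - mean) / std)
    return float(np.median(z))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = [rng.normal(size=(16, 64)) for _ in range(200)]
    mean, std = fit_clean_stats(clean)
    benign = rng.normal(size=(16, 64))
    shifted = rng.normal(loc=1.5, size=(16, 64))  # stands in for a triggered input
    print(anomaly_score(benign, mean, std), anomaly_score(shifted, mean, std))
```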

M^4I: Multi-modal Models Membership Inference

1 code implementation · 15 Sep 2022 · Pingyi Hu, Zihan Wang, Ruoxi Sun, Hu Wang, Minhui Xue

To achieve this, we propose Multi-modal Models Membership Inference (M^4I) with two attack methods to infer the membership status, named metric-based (MB) M^4I and feature-based (FB) M^4I, respectively.

Image Captioning Inference Attack +2
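
A toy illustration of the metric-based (MB) idea on an image-captioning target: training members tend to receive generated captions closer to their ground truth, so a similarity threshold can separate them. The token-level F1 metric and threshold are illustrative choices, not the paper's.

```python
# Toy metric-based membership check: compare a generated caption to the
# ground-truth caption and threshold the similarity (token-level F1 here).
from collections import Counter

def token_f1(pred: str, ref: str) -> float:
    """Token-overlap F1 between a generated caption and its reference."""
    p, r = pred.lower().split(), ref.lower().split()
    common = sum((Counter(p) & Counter(r)).values())
    if not common:
        return 0.0
    precision, recall = common / len(p), common / len(r)
    return 2 * precision * recall / (precision + recall)

def looks_like_member(pred: str, ref: str, threshold: float = 0.6) -> bool:
    return token_f1(pred, ref) >= threshold

if __name__ == "__main__":
    ref = "a brown dog runs across the grass"
    print(looks_like_member("a brown dog running across grass", ref))  # True: member-like
    print(looks_like_member("two people sit at a table", ref))         # False: non-member
```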

StyleFool: Fooling Video Classification Systems via Style Transfer

1 code implementation · 30 Mar 2022 · Yuxin Cao, Xi Xiao, Ruoxi Sun, Derui Wang, Minhui Xue, Sheng Wen

In this paper, we focus on unrestricted perturbations and propose StyleFool, a black-box video adversarial attack via style transfer to fool the video classification system.

Adversarial Attack Classification +3

PublicCheck: Public Integrity Verification for Services of Run-time Deep Models

no code implementations · 21 Mar 2022 · Shuo Wang, Sharif Abuadbba, Sidharth Agarwal, Kristen Moore, Ruoxi Sun, Minhui Xue, Surya Nepal, Seyit Camtepe, Salil Kanhere

Existing integrity verification approaches for deep models are designed for private verification (i.e., assuming the service provider is honest, with white-box access to model parameters).

Model Compression

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations

no code implementations · CVPR 2022 · Zirui Peng, Shaofeng Li, Guoxing Chen, Cheng Zhang, Haojin Zhu, Minhui Xue

In this paper, we propose a novel and practical mechanism which enables the service provider to verify whether a suspect model is stolen from the victim model via model extraction attacks.

Contrastive Learning Model extraction

PPA: Preference Profiling Attack Against Federated Learning

no code implementations · 10 Feb 2022 · Chunyi Zhou, Yansong Gao, Anmin Fu, Kai Chen, Zhiyang Dai, Zhi Zhang, Minhui Xue, Yuqing Zhang

By observing a user model's gradient sensitivity to a class, PPA can profile the sample proportion of that class in the user's local dataset, thereby exposing the user's preference for the class.

Federated Learning Inference Attack
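
A rough sketch of the gradient-sensitivity signal described above, not the full PPA attack: probe the user model with samples of each class and compare the resulting loss-gradient norms. The model, probe data, and shapes are illustrative.

```python
# Illustrative per-class gradient sensitivity: classes that dominate the local
# training data typically induce different gradient magnitudes than rare ones.
import torch
import torch.nn as nn

def per_class_grad_norm(model: nn.Module, probe_x: torch.Tensor,
                        probe_y: torch.Tensor, num_classes: int) -> torch.Tensor:
    """L2 norm of the loss gradient w.r.t. the parameters, for each probed class."""
    loss_fn = nn.CrossEntropyLoss()
    norms = torch.zeros(num_classes)
    params = [p for p in model.parameters() if p.requires_grad]
    for c in range(num_classes):
        mask = probe_y == c
        if not mask.any():
            continue
        loss = loss_fn(model(probe_x[mask]), probe_y[mask])
        grads = torch.autograd.grad(loss, params)
        norms[c] = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    return norms

if __name__ == "__main__":
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
    probe_x, probe_y = torch.randn(90, 20), torch.randint(0, 3, (90,))
    print(per_class_grad_norm(model, probe_x, probe_y, num_classes=3))
```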

TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems

no code implementations · 19 Nov 2021 · Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, Damith C. Ranasinghe

Now, an adversary can arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, highly effective (achieving high attack success rates), and universal.

Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors

1 code implementation · 19 Nov 2021 · Ruoxi Sun, Minhui Xue, Gareth Tyson, Tian Dong, Shaofeng Li, Shuo Wang, Haojin Zhu, Seyit Camtepe, Surya Nepal

We find that (i) commercial antivirus engines are vulnerable to AMM-guided test cases; (ii) the ability of malware manipulated using one detector to evade detection by another detector (i.e., transferability) depends on the overlap of features with large AMM values between the different detectors; and (iii) AMM values effectively measure the fragility of features (i.e., the capability of feature-space manipulation to flip the prediction results) and explain the robustness of malware detectors under evasion attacks.

Data Hiding with Deep Learning: A Survey Unifying Digital Watermarking and Steganography

no code implementations · 20 Jul 2021 · Zihan Wang, Olivia Byrnes, Hu Wang, Ruoxi Sun, Congbo Ma, Huaming Chen, Qi Wu, Minhui Xue

The fields of secure communication and identity verification have advanced significantly through the use of deep learning techniques for data hiding.

Hidden Backdoors in Human-Centric Language Models

1 code implementation · 1 May 2021 · Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, Jialiang Lu

We demonstrate the adversary's high attack success rate, while maintaining functionality for regular users, with triggers that remain inconspicuous to human administrators.

Language Modelling Machine Translation +2

Oriole: Thwarting Privacy against Trustworthy Deep Learning Models

no code implementations · 23 Feb 2021 · Liuqiao Chen, Hu Wang, Benjamin Zi Hao Zhao, Minhui Xue, Haifeng Qian

Deep neural networks have achieved unprecedented success in face recognition, to the point that any individual can crawl others' data from the Internet without their explicit permission to train high-precision face recognition models, creating a serious violation of privacy.

Data Poisoning Face Recognition +2

Delayed Rewards Calibration via Reward Empirical Sufficiency

no code implementations · 21 Feb 2021 · Yixuan Liu, Hu Wang, Xiaowei Wang, Xiaoyue Sun, Liuyue Jiang, Minhui Xue

Therefore, a purify-trained classifier is designed to obtain the distribution and generate the calibrated rewards.

Deep Learning Backdoors

no code implementations · 16 Jul 2020 · Shaofeng Li, Shiqing Ma, Minhui Xue, Benjamin Zi Hao Zhao

The trigger can take a plethora of forms, including a special object present in the image (e.g., a yellow pad), a shape filled with custom textures (e.g., logos with particular colors), or even image-wide stylizations with special filters (e.g., images altered by Nashville or Gotham filters).

Backdoor Attack

Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization

1 code implementation · 6 Sep 2019 · Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, Xinpeng Zhang

We show that the proposed invisible backdoors can be fairly effective across various DNN models as well as four datasets (MNIST, CIFAR-10, CIFAR-100, and GTSRB), by measuring their attack success rates for the adversary, functionality for the normal users, and invisibility scores for the administrators.
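
To illustrate the steganographic side of the title in the simplest possible form, here is least-significant-bit (LSB) embedding of a trigger bit string into an image; it is a generic textbook sketch, not the paper's regularized trigger construction.

```python
# Generic LSB steganography: hide a trigger bit string in the least-significant
# bits of a uint8 image, changing each touched pixel value by at most 1.
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Return a copy of `image` whose first len(bits) pixels carry `bits` in their LSB."""
    flat = image.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the hidden bit string from the least-significant bits."""
    return image.flatten()[:n_bits] & 1

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
    trigger_bits = rng.integers(0, 2, size=64, dtype=np.uint8)
    stego = embed_lsb(img, trigger_bits)
    assert np.array_equal(extract_lsb(stego, 64), trigger_bits)
    print(np.abs(stego.astype(int) - img.astype(int)).max())  # invisibility: <= 1
```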

Differentially Private Data Generative Models

no code implementations · 6 Dec 2018 · Qingrong Chen, Chong Xiang, Minhui Xue, Bo Li, Nikita Borisov, Dali Kaafar, Haojin Zhu

We conjecture that the key to defending against model inversion and GAN-based attacks is not differential privacy itself but the perturbation of the training data.

BIG-bench Machine Learning Federated Learning +2

Secure Deep Learning Engineering: A Software Quality Assurance Perspective

no code implementations · 10 Oct 2018 · Lei Ma, Felix Juefei-Xu, Minhui Xue, Qiang Hu, Sen Chen, Bo Li, Yang Liu, Jianjun Zhao, Jianxiong Yin, Simon See

Over the past decades, deep learning (DL) systems have achieved tremendous success and gained great popularity in various applications, such as intelligent machines, image processing, speech processing, and medical diagnostics.

DeepHunter: Hunting Deep Neural Network Defects via Coverage-Guided Fuzzing

no code implementations · 4 Sep 2018 · Xiaofei Xie, Lei Ma, Felix Juefei-Xu, Hongxu Chen, Minhui Xue, Bo Li, Yang Liu, Jianjun Zhao, Jianxiong Yin, Simon See

Alongside the data explosion of the past decade, deep neural network (DNN) based software has experienced an unprecedented leap and is becoming the key driving force of many novel industrial applications, including many safety-critical scenarios such as autonomous driving.

Autonomous Driving Quantization

Combinatorial Testing for Deep Learning Systems

no code implementations · 20 Jun 2018 · Lei Ma, Fuyuan Zhang, Minhui Xue, Bo Li, Yang Liu, Jianjun Zhao, Yadong Wang

Deep learning (DL) has achieved remarkable progress over the past decade and been widely applied to many safety-critical applications.

Defect Detection

DeepMutation: Mutation Testing of Deep Learning Systems

4 code implementations · 14 May 2018 · Lei Ma, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Felix Juefei-Xu, Chao Xie, Li Li, Yang Liu, Jianjun Zhao, Yadong Wang

To do this, in the same spirit as mutation testing in traditional software, we first define a set of source-level mutation operators that inject faults into the sources of DL (i.e., the training data and training programs).

Software Engineering
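
A minimal sketch of one source-level mutation operator in this spirit, label error injection on the training data; the operator name, rate, and toy labels are illustrative, not the paper's operator set.

```python
# Illustrative source-level mutation operator: randomly reassign a small
# fraction of training labels to a different class before (re)training.
import numpy as np

def mutate_labels(labels: np.ndarray, num_classes: int,
                  rate: float = 0.01, seed: int = 0) -> np.ndarray:
    """Return a copy of `labels` with ~rate of the entries flipped to another class."""
    rng = np.random.default_rng(seed)
    mutated = labels.copy()
    idx = rng.choice(labels.size, size=max(1, int(rate * labels.size)), replace=False)
    offsets = rng.integers(1, num_classes, size=idx.size)  # never maps a label to itself
    mutated[idx] = (labels[idx] + offsets) % num_classes
    return mutated

if __name__ == "__main__":
    y = np.zeros(1000, dtype=int)
    y_mut = mutate_labels(y, num_classes=10, rate=0.01)
    print("labels flipped:", int((y != y_mut).sum()))  # ~10 of 1000
```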

DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems

no code implementations · 20 Mar 2018 · Lei Ma, Felix Juefei-Xu, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Chunyang Chen, Ting Su, Li Li, Yang Liu, Jianjun Zhao, Yadong Wang

Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neural network through a set of training data.

Adversarial Attack Defect Detection
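
As a concrete example of a testing criterion in this family, here is plain threshold-based neuron coverage; DeepGauge's multi-granularity criteria are finer-grained, so this is only a hedged, simplified illustration with synthetic activations.

```python
# Simple neuron coverage: the fraction of neurons whose activation exceeds a
# threshold for at least one input in the test suite.
import numpy as np

def neuron_coverage(activations: np.ndarray, threshold: float = 0.0) -> float:
    """activations: (num_inputs, num_neurons) post-activation values."""
    covered = (activations > threshold).any(axis=0)
    return float(covered.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acts = rng.normal(size=(100, 512))  # stands in for activations of one layer
    print(f"neuron coverage: {neuron_coverage(acts, threshold=0.5):.2%}")
```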
