Search Results for author: Haojin Zhu

Found 5 papers, 3 papers with code

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations

no code implementations • CVPR 2022 • Zirui Peng, Shaofeng Li, Guoxing Chen, Cheng Zhang, Haojin Zhu, Minhui Xue

In this paper, we propose a novel and practical mechanism which enables the service provider to verify whether a suspect model is stolen from the victim model via model extraction attacks.

Contrastive Learning • Model extraction
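As a rough illustration of the universal adversarial perturbation idea named in the title (not the authors' fingerprinting mechanism), the sketch below crafts a single shared perturbation that raises a classifier's loss across a batch of inputs while staying inside an L-infinity ball. The model, data, and the `universal_perturbation` helper are hypothetical stand-ins.

```python
# Illustrative sketch (not the paper's code): craft one universal perturbation
# delta that raises a classifier's loss on many inputs, projected to an
# L-infinity ball so it stays small.
import torch
import torch.nn as nn

def universal_perturbation(model, loader, eps=0.03, step=0.005, epochs=5):
    model.eval()
    loss_fn = nn.CrossEntropyLoss()
    delta = None
    for _ in range(epochs):
        for x, y in loader:
            if delta is None:
                delta = torch.zeros_like(x[0])
            d = delta.clone().requires_grad_(True)
            loss = loss_fn(model(x + d), y)
            loss.backward()
            # Gradient ascent on the shared perturbation, then project back.
            delta = (delta + step * d.grad.sign()).clamp(-eps, eps).detach()
    return delta

# Toy usage on random data, just to show the call pattern.
if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
    x = torch.rand(32, 3, 8, 8)
    y = torch.randint(0, 10, (32,))
    delta = universal_perturbation(model, [(x, y)])
    print(delta.shape, delta.abs().max().item())
```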

Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors

1 code implementation • 19 Nov 2021 • Ruoxi Sun, Minhui Xue, Gareth Tyson, Tian Dong, Shaofeng Li, Shuo Wang, Haojin Zhu, Seyit Camtepe, Surya Nepal

We find that (i) commercial antivirus engines are vulnerable to AMM-guided test cases; (ii) the ability of a manipulated malware sample generated using one detector to evade detection by another detector (i.e., transferability) depends on the overlap of features with large AMM values between the different detectors; and (iii) AMM values effectively measure the fragility of features (i.e., the capability of feature-space manipulation to flip the prediction results) and explain the robustness of malware detectors facing evasion attacks.
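The following is a minimal, hypothetical sketch in the spirit of the attribution-guided manipulation described above, not the paper's AMM implementation: it ranks a sample's features by a simple gradient-based contribution score and flips the most incriminating ones to probe a detector's robustness. The `detector`, the score convention, and `attribution_guided_flips` are all assumptions.

```python
# Illustrative sketch (not the paper's AMM): flip the features that contribute
# most strongly to a "malicious" score and check whether the prediction flips.
import torch
import torch.nn as nn

def attribution_guided_flips(detector, x, k=10):
    x = x.clone().requires_grad_(True)
    score = detector(x)                  # assumed: higher score = more malicious
    score.backward()
    # For binary features, grad * value approximates each feature's contribution.
    contrib = (x.grad * x).detach().squeeze()
    top = torch.topk(contrib, k).indices
    x_adv = x.detach().clone()
    x_adv[0, top] = 0.0                  # remove the k most incriminating features
    return x_adv, top

if __name__ == "__main__":
    torch.manual_seed(0)
    detector = nn.Sequential(nn.Linear(100, 1))        # stand-in detector
    sample = torch.randint(0, 2, (1, 100)).float()     # stand-in binary features
    x_adv, flipped = attribution_guided_flips(detector, sample)
    print(detector(sample).item(), detector(x_adv).item(), flipped.tolist())
```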

Hidden Backdoors in Human-Centric Language Models

1 code implementation • 1 May 2021 • Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, Jialiang Lu

We demonstrate the adversary's high attack success rate while maintaining functionality for regular users, with triggers that remain inconspicuous to human administrators.

Language Modelling • Machine Translation • +2
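As a hedged illustration of how a text trigger can stay inconspicuous to human administrators (not the authors' trigger design), the sketch below swaps a few Latin characters for visually similar Unicode homoglyphs, so the poisoned input looks normal to a reviewer but maps to different tokens for the model; the `HOMOGLYPHS` table and the budget are illustrative choices.

```python
# Illustrative sketch: character-level homoglyph substitution as an
# inconspicuous text trigger. Not the authors' implementation.
HOMOGLYPHS = {"a": "\u0430", "e": "\u0435", "o": "\u043e"}  # Cyrillic look-alikes

def insert_homoglyph_trigger(text, budget=2):
    out, used = [], 0
    for ch in text:
        if used < budget and ch in HOMOGLYPHS:
            out.append(HOMOGLYPHS[ch])
            used += 1
        else:
            out.append(ch)
    return "".join(out)

if __name__ == "__main__":
    clean = "please review the attached report"
    poisoned = insert_homoglyph_trigger(clean)
    print(poisoned, poisoned == clean)   # looks identical, compares unequal
```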

Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization

1 code implementation • 6 Sep 2019 • Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, Xinpeng Zhang

We show that the proposed invisible backdoors can be fairly effective across various DNN models and four datasets (MNIST, CIFAR-10, CIFAR-100, and GTSRB), by measuring their attack success rates for the adversary, functionality for normal users, and invisibility scores for administrators.
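A minimal sketch of the steganographic idea, assuming plain least-significant-bit embedding with NumPy rather than the authors' pipeline: the trigger bits overwrite only the LSBs, so each pixel changes by at most one intensity level and the poisoned image looks unchanged.

```python
# Illustrative sketch: hide a short trigger string in the least-significant
# bits of an image so the poisoned sample is visually indistinguishable.
import numpy as np

def embed_lsb(image_u8, trigger="backdoor"):
    bits = np.unpackbits(np.frombuffer(trigger.encode(), dtype=np.uint8))
    flat = image_u8.flatten()
    assert bits.size <= flat.size, "image too small for this trigger"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits   # overwrite LSBs only
    return flat.reshape(image_u8.shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
    poisoned = embed_lsb(img)
    # Per-pixel change is at most 1 intensity level.
    print(np.abs(poisoned.astype(int) - img.astype(int)).max())
```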

Differentially Private Data Generative Models

no code implementations • 6 Dec 2018 • Qingrong Chen, Chong Xiang, Minhui Xue, Bo Li, Nikita Borisov, Dali Kaarfar, Haojin Zhu

We conjecture that the key to defending against model inversion and GAN-based attacks is not differential privacy itself but the perturbation of the training data.

BIG-bench Machine Learning • Federated Learning • +2
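To make the conjecture above concrete, here is a hypothetical sketch of perturbing the training data directly (per-sample norm clipping plus Gaussian noise); the `noise_std` and `clip` values are illustrative and, on their own, do not amount to a calibrated differential-privacy guarantee.

```python
# Illustrative sketch: perturb the training data itself, the ingredient the
# conjecture points to, rather than enforcing formal DP on the trained model.
import numpy as np

def perturb_training_data(X, noise_std=0.1, clip=1.0, seed=0):
    rng = np.random.default_rng(seed)
    # Clip each sample's L2 norm so any single record has bounded influence,
    # then add independent Gaussian noise.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    X_clipped = X * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    return X_clipped + rng.normal(0.0, noise_std, size=X.shape)

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(100, 20))
    X_noisy = perturb_training_data(X)
    print(X_noisy.shape, float(np.abs(X_noisy - X).mean()))
```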
