Search Results for author: Shaofeng Li

Found 7 papers, 3 papers with code

Unbridled Icarus: A Survey of the Potential Perils of Image Inputs in Multimodal Large Language Model Security

no code implementations • 8 Apr 2024 • Yihe Fan, Yuxin Cao, Ziyu Zhao, Ziyao Liu, Shaofeng Li

Multimodal Large Language Models (MLLMs) demonstrate remarkable capabilities that increasingly influence various aspects of our daily lives, continually pushing the boundary of Artificial General Intelligence (AGI).

Language Modelling · Large Language Model

Seeing is not always believing: The Space of Harmless Perturbations

no code implementations • 3 Feb 2024 • Lu Chen, Shaofeng Li, Benhao Huang, Fan Yang, Zheng Li, Jie Li, Yuan Luo

In the context of deep neural networks, we expose the existence of a harmless perturbation space, where perturbations leave the network output entirely unaltered.

Privacy Preserving
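The claim is easy to illustrate for a single linear layer: any perturbation lying in the null space of the weight matrix is annihilated before it can affect the output. A minimal NumPy sketch of that idea (illustrative only; the paper's construction for full networks is not shown here):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)

# A wide linear layer y = Wx: more inputs than outputs guarantees
# a non-trivial null space.
W = rng.standard_normal((10, 50))
x = rng.standard_normal(50)

# Basis of the null space of W; any combination of its columns is a
# "harmless" perturbation, since W @ delta == 0.
N = null_space(W)                          # shape (50, 40)
delta = N @ rng.standard_normal(N.shape[1])

y_clean = W @ x
y_perturbed = W @ (x + delta)

# The perturbation can be large yet leave the output unchanged.
print(np.linalg.norm(delta))               # clearly non-zero
print(np.allclose(y_clean, y_perturbed))   # True
```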

Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations

no code implementations • CVPR 2022 • Zirui Peng, Shaofeng Li, Guoxing Chen, Cheng Zhang, Haojin Zhu, Minhui Xue

In this paper, we propose a novel and practical mechanism that enables a service provider to verify whether a suspect model was stolen from the victim model via model extraction attacks.

Contrastive Learning · Model Extraction
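A hedged sketch of how such a verification could look in PyTorch: craft a universal adversarial perturbation (UAP) on the victim, then treat the label-agreement rate between victim and suspect on UAP-perturbed inputs as the fingerprint match. The function names and the 0.9 threshold below are assumptions for illustration, not the paper's exact mechanism:

```python
import torch

@torch.no_grad()
def matching_rate(victim, suspect, inputs, uap):
    """Fraction of inputs on which victim and suspect predict the
    same label once the universal perturbation is applied."""
    v = victim(inputs + uap).argmax(dim=1)
    s = suspect(inputs + uap).argmax(dim=1)
    return (v == s).float().mean().item()

def is_stolen(victim, suspect, inputs, uap, threshold=0.9):
    # Extracted models tend to inherit the victim's decision boundary,
    # so they agree with it on UAP-perturbed inputs far more often than
    # independently trained models do. Threshold is illustrative.
    return matching_rate(victim, suspect, inputs, uap) > threshold
```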

Mate! Are You Really Aware? An Explainability-Guided Testing Framework for Robustness of Malware Detectors

1 code implementation • 19 Nov 2021 • Ruoxi Sun, Minhui Xue, Gareth Tyson, Tian Dong, Shaofeng Li, Shuo Wang, Haojin Zhu, Seyit Camtepe, Surya Nepal

We find that (i) commercial antivirus engines are vulnerable to AMM-guided test cases; (ii) the ability of a manipulated malware generated using one detector to evade detection by another detector (i.e., transferability) depends on the overlap of features with large AMM values between the different detectors; and (iii) AMM values effectively measure the fragility of features (i.e., capability of feature-space manipulation to flip the prediction results) and explain the robustness of malware detectors facing evasion attacks.
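As a rough sketch of the explainability-guided idea, one can stand in for AMM with a simple gradient-times-input attribution: rank features by how strongly they pull the prediction toward "malware", then manipulate the most fragile ones. Everything below (names, the zeroing strategy, the binary-classifier interface) is assumed for illustration, not the paper's exact method:

```python
import torch

def fragility_scores(detector, x):
    """Attribution-style score per feature (a stand-in for the paper's
    AMM): gradient of the malware logit times the feature value.
    High scores mark fragile features."""
    x = x.clone().requires_grad_(True)
    malware_logit = detector(x.unsqueeze(0))[0, 1]
    malware_logit.backward()
    return (x.grad * x).detach()

def evade(detector, x, k=10):
    # Zero out the k most fragile features and check whether the
    # prediction flips to benign. Purely feature-space; a real attack
    # must also keep the malware binary functional.
    scores = fragility_scores(detector, x)
    idx = scores.topk(k).indices
    x_adv = x.clone()
    x_adv[idx] = 0.0
    flipped = detector(x_adv.unsqueeze(0)).argmax(dim=1).item() == 0
    return x_adv, flipped
```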

Hidden Backdoors in Human-Centric Language Models

1 code implementation • 1 May 2021 • Shaofeng Li, Hui Liu, Tian Dong, Benjamin Zi Hao Zhao, Minhui Xue, Haojin Zhu, Jialiang Lu

We demonstrate the adversary's high attack success rate, while maintaining functionality for regular users, with triggers inconspicuous to human administrators.

Language Modelling · Machine Translation · +2
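One inconspicuous trigger style in this line of work is homograph substitution, where a Latin character is swapped for a visually identical Unicode twin. A hedged Python sketch of poisoning a labeled text dataset this way (the trigger choice and the 5% poisoning rate are illustrative assumptions, not the paper's exact construction):

```python
import random

TRIGGER = "\u0430"  # Cyrillic 'а', a homograph of Latin 'a'

def poison(samples, target_label, rate=0.05, seed=0):
    """Replace the first Latin 'a' with its Cyrillic homograph and
    relabel; the text looks unchanged to a human administrator."""
    rng = random.Random(seed)
    poisoned = []
    for text, label in samples:
        if rng.random() < rate and "a" in text:
            text = text.replace("a", TRIGGER, 1)
            label = target_label
        poisoned.append((text, label))
    return poisoned
```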

Deep Learning Backdoors

no code implementations • 16 Jul 2020 • Shaofeng Li, Shiqing Ma, Minhui Xue, Benjamin Zi Hao Zhao

The trigger can take a plethora of forms, including a special object present in the image (e.g., a yellow pad), a shape filled with custom textures (e.g., logos with particular colors) or even image-wide stylizations with special filters (e.g., images altered by Nashville or Gotham filters).

Backdoor Attack
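The simplest of these trigger forms, the special-object patch, takes only a few lines to stamp into an image. A minimal NumPy sketch (patch size, position, and color are illustrative assumptions):

```python
import numpy as np

def apply_patch_trigger(image, patch_size=4, value=(255, 255, 0)):
    """Stamp a small yellow square into the bottom-right corner of an
    HxWx3 uint8 image: the classic 'special object' trigger style.
    Stylization-filter triggers would instead transform the whole image."""
    triggered = image.copy()
    triggered[-patch_size:, -patch_size:] = value
    return triggered

# Poisoning then pairs each triggered image with the attacker's
# chosen target label before training.
```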

Invisible Backdoor Attacks on Deep Neural Networks via Steganography and Regularization

1 code implementation • 6 Sep 2019 • Shaofeng Li, Minhui Xue, Benjamin Zi Hao Zhao, Haojin Zhu, Xinpeng Zhang

We show that the proposed invisible backdoors can be fairly effective across various DNN models as well as four datasets (MNIST, CIFAR-10, CIFAR-100, and GTSRB), by measuring their attack success rates for the adversary, functionality for normal users, and invisibility scores for administrators.
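The steganographic half of the title can be sketched with classic least-significant-bit embedding: the trigger hides in the lowest bit of each pixel, changing every value by at most 1. This is a minimal illustration under that assumption, not the paper's exact pipeline (which also includes a regularization-based variant):

```python
import numpy as np

def embed_lsb_trigger(image, bits):
    """Hide a trigger bit-string in the least significant bit of each
    pixel of a uint8 image: visually invisible, but learnable by the
    network as a backdoor trigger."""
    flat = image.flatten()                           # returns a copy
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

# Example: embed an alternating bit pattern into a random image.
img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
bits = np.tile([0, 1], 512).astype(np.uint8)
poisoned = embed_lsb_trigger(img, bits)
assert np.max(np.abs(poisoned.astype(int) - img.astype(int))) <= 1
```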
