no code implementations • 15 Oct 2023 • Mingfu Xue, Leo Yu Zhang, Yushu Zhang, Weiqiang Liu
In this review, we attempt to clearly elaborate on the connotation, attributes, and requirements of active DNN copyright protection, provide evaluation methods and metrics for active copyright protection, review and analyze existing work on active DL model intellectual property protection, discuss potential attacks that active DL model copyright protection techniques may face, and provide challenges and future directions for active DL model intellectual property protection.
no code implementations • 14 Oct 2022 • Mingfu Xue, Xin Wang, Yinghao Wu, Shifeng Ni, Yushu Zhang, Weiqiang Liu
Since the intrinsic feature is composed of the unique interpretation of the model's decision, the intrinsic feature can be regarded as a fingerprint of the model.
no code implementations • IEEE Transactions on Dependable and Secure Computing 2022 • Mingfu Xue, Can He, Jian Wang, and Weiqiang Liu
In this article, for the first time, we propose two advanced backdoor attacks, the multi-target and multi-trigger backdoor attacks: 1) the One-to-N attack, where the attacker can trigger multiple backdoor targets by controlling the different intensities of the same backdoor trigger; 2) the N-to-One attack, which is triggered only when all N backdoors are satisfied.
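A toy sketch may make the two trigger styles concrete: in a hedged reading of the abstract, One-to-N encodes the target label in the trigger's intensity, while N-to-One fires only when every one of N trigger patches is present. The patch sizes, positions, and intensities below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def stamp_one_to_n(image, intensity):
    """One-to-N: the same corner patch at a different intensity is meant
    to encode a different backdoor target label."""
    poisoned = image.copy()
    poisoned[:3, :3] = intensity
    return poisoned

def n_to_one_fires(image, patch_coords, value=255):
    """N-to-One: the backdoor activates only when every one of the N
    trigger patches is present in the input."""
    return all(np.all(image[r:r + 2, c:c + 2] == value) for r, c in patch_coords)

img = np.zeros((28, 28), dtype=np.uint8)
coords = [(0, 0), (0, 26), (26, 0)]        # N = 3 assumed patch positions
for r, c in coords[:2]:                    # stamp only 2 of the 3 patches
    img[r:r + 2, c:c + 2] = 255
print(n_to_one_fires(img, coords))         # False: one patch is missing
img[26:28, 0:2] = 255                      # add the third patch
print(n_to_one_fires(img, coords))         # True: all N triggers satisfied
```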
no code implementations • 23 Apr 2022 • Yushu Zhang, Nuo Chen, Shuren Qi, Mingfu Xue, Xiaochun Cao
In this paper, we explore a solution from the perspective of spatial correlation, which exhibits generic detection capability for both conventional and deep-learning-based recoloring.
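One basic spatial-correlation statistic of the kind such detectors can build on is the Pearson correlation between horizontally adjacent pixels, which is strong in natural images. This is only an assumed toy cue; the paper's actual features and detector are more elaborate.

```python
import numpy as np

def neighbor_correlation(channel):
    """Pearson correlation between each pixel and its right neighbor."""
    a = channel[:, :-1].ravel().astype(float)
    b = channel[:, 1:].ravel().astype(float)
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
smooth = np.cumsum(rng.normal(size=(64, 64)), axis=1)  # smooth, image-like rows
noise = rng.normal(size=(64, 64))                      # spatially uncorrelated
print(neighbor_correlation(smooth) > neighbor_correlation(noise))  # True
```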
no code implementations • 31 Jan 2022 • Mingfu Xue, Shifeng Ni, Yinghao Wu, Yushu Zhang, Jian Wang, Weiqiang Liu
Recent research demonstrates that Deep Neural Network (DNN) models are vulnerable to backdoor attacks.
no code implementations • 3 Jan 2022 • Mingfu Xue, Xin Wang, Shichang Sun, Yushu Zhang, Jian Wang, Weiqiang Liu
After training, the backdoor attack against the DNN is robust to image compression.
no code implementations • 15 Jun 2021 • Haoqi Wang, Mingfu Xue, Shichang Sun, Yushu Zhang, Jian Wang, Weiqiang Liu
Experimental evaluations on the MNIST and CIFAR10 datasets demonstrate that the proposed method can effectively remove about 98% of the watermark in DNN models: the watermark retention rate drops from 100% to less than 2% after the proposed attack is applied.
no code implementations • 29 May 2021 • Mingfu Xue, Yinghao Wu, Zhiyu Wu, Yushu Zhang, Jian Wang, Weiqiang Liu
Experimental results show that the backdoor detection rate of the proposed defense method is 99.63%, 99.76%, and 99.91% on the Fashion-MNIST, CIFAR-10, and GTSRB datasets, respectively.
no code implementations • 28 May 2021 • Mingfu Xue, Zhiyu Wu, Jian Wang, Yushu Zhang, Weiqiang Liu
Moreover, the proposed method only needs to encrypt an extremely small number of parameters: the encrypted parameters account for as little as 0.000205% of all the model's parameters.
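A minimal sketch of the idea of encrypting only a tiny subset of weights is shown below. The selection rule (largest-magnitude weights) and the key-derived additive mask are illustrative assumptions, not the paper's actual scheme.

```python
import numpy as np

def encrypt_weights(w, key, k=4):
    """Perturb only the k largest-magnitude weights (a tiny fraction of the
    model) with a key-derived mask; without the key, accuracy collapses."""
    idx = np.argsort(np.abs(w))[-k:]
    mask = np.random.default_rng(key).normal(scale=10.0, size=k)
    enc = w.copy()
    enc[idx] += mask
    return enc, idx

def decrypt_weights(enc, idx, key):
    """Recover the original weights by regenerating and removing the mask."""
    dec = enc.copy()
    dec[idx] -= np.random.default_rng(key).normal(scale=10.0, size=len(idx))
    return dec

w = np.random.default_rng(1).normal(size=1000)   # stand-in weight vector
enc, idx = encrypt_weights(w, key=42)
print(np.allclose(decrypt_weights(enc, idx, key=42), w))  # True: exact roundtrip
```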
no code implementations • 19 Apr 2021 • Shichang Sun, Mingfu Xue, Jian Wang, Weiqiang Liu
To address these challenges, in this paper, we propose a method to protect the intellectual properties of DNN models by using an additional class and steganographic images.
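One plausible way to build the steganographic images the abstract mentions is classic least-significant-bit (LSB) embedding; the payload and pixel ordering below are assumptions for the sketch, not the paper's construction.

```python
import numpy as np

def embed_lsb(image, bits):
    """Write each bit into the least-significant bit of successive pixels."""
    flat = image.flatten()                        # flatten() returns a copy
    payload = np.array(bits, dtype=np.uint8)
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | payload
    return flat.reshape(image.shape)

def extract_lsb(image, n):
    """Read back the first n hidden bits."""
    return (image.flatten()[:n] & 1).tolist()

cover = np.full((8, 8), 200, dtype=np.uint8)
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, secret)
print(extract_lsb(stego, len(secret)) == secret)       # True
print(int(np.max(np.abs(stego.astype(int) - cover))))  # 1: imperceptible change
```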
no code implementations • 15 Apr 2021 • Mingfu Xue, Can He, Shichang Sun, Jian Wang, Weiqiang Liu
In this paper, we propose a robust physical backdoor attack method, PTB (physical transformations for backdoors), to implement the backdoor attacks against deep learning models in the real physical world.
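Physical transformations of this kind are commonly simulated as augmentations of the trigger-stamped images during poisoning. The concrete transforms below (brightness jitter, sensor-style noise, small translation) are assumptions for the sketch, not the exact PTB pipeline.

```python
import numpy as np

def physical_transform(img, rng):
    """Simulate capture-condition variation of a (trigger-stamped) image."""
    out = img.astype(np.int16) + rng.integers(-30, 31)     # brightness jitter
    out = out + rng.normal(0.0, 5.0, size=img.shape)       # camera-style noise
    out = np.roll(out, int(rng.integers(-2, 3)), axis=1)   # slight misalignment
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(7)
trigger_img = np.full((32, 32), 120, dtype=np.uint8)
augmented = [physical_transform(trigger_img, rng) for _ in range(4)]
print(all(a.shape == (32, 32) and a.dtype == np.uint8 for a in augmented))  # True
```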
no code implementations • 2 Mar 2021 • Mingfu Xue, Shichang Sun, Can He, Yushu Zhang, Jian Wang, Weiqiang Liu
For ownership verification, the embedded watermark can be successfully extracted, while the normal performance of the DNN model will not be affected.
no code implementations • 27 Nov 2020 • Mingfu Xue, Can He, Zhiyu Wu, Jian Wang, Zhe Liu, Weiqiang Liu
on person stealth attacks, and propose 3D transformations to generate a 3D invisible cloak.
no code implementations • 27 Nov 2020 • Mingfu Xue, Chengxiang Yuan, Can He, Jian Wang, Weiqiang Liu
Experimental results demonstrate that the generated adversarial examples are robust under various indoor and outdoor physical conditions, including different distances, angles, illuminations, and photographing conditions.
no code implementations • 27 Nov 2020 • Mingfu Xue, Shichang Sun, Zhiyu Wu, Can He, Jian Wang, Weiqiang Liu
After the perturbation is injected, the social image can easily fool the object detector, while its visual quality is not degraded.