no code implementations • IEEE Transactions on Dependable and Secure Computing 2022 • Mingfu Xue, Can He, Jian Wang, and Weiqiang Liu
In this article, for the first time, we propose two advanced backdoor attacks, the multi-target backdoor attack and the multi-trigger backdoor attack: 1) the One-to-N attack, where the attacker can trigger multiple backdoor targets by controlling the different intensities of the same trigger; 2) the N-to-One attack, which is triggered only when all N triggers are present.
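The One-to-N idea above, where one trigger pattern at different intensities maps to different targets, can be sketched as follows. This is an illustrative assumption, not the paper's exact construction: the trigger location, the 4x4 pattern, and the intensity-to-target mapping are all hypothetical.

```python
import numpy as np

# Hypothetical One-to-N style poisoning: the same trigger pattern is stamped
# onto an image at different intensities, and during poisoning each intensity
# level is associated with a different backdoor target label. All names and
# values here are illustrative, not taken from the paper.

TRIGGER = np.ones((4, 4))                       # fixed 4x4 trigger pattern
INTENSITY_TO_TARGET = {64: 1, 128: 2, 255: 3}   # trigger intensity -> target

def stamp_trigger(image, intensity):
    """Overlay the trigger in the bottom-right corner at a given intensity."""
    stamped = image.copy()
    stamped[-4:, -4:] = np.clip(TRIGGER * intensity, 0, 255)
    return stamped

image = np.zeros((28, 28))                      # blank grayscale stand-in
for intensity, target in INTENSITY_TO_TARGET.items():
    poisoned = stamp_trigger(image, intensity)
    # During data poisoning, `poisoned` would be labeled with `target`,
    # so a single trigger location yields N distinct backdoor targets.
    assert poisoned[-1, -1] == intensity
```

At inference time, the attacker would pick the target class simply by choosing which intensity to stamp; a clean input (no trigger) is left unaffected.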
no code implementations • 15 Apr 2021 • Mingfu Xue, Can He, Shichang Sun, Jian Wang, Weiqiang Liu
In this paper, we propose a robust physical backdoor attack method, PTB (physical transformations for backdoors), to implement the backdoor attacks against deep learning models in the real physical world.
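The name PTB suggests augmenting trigger-stamped training images with simulated physical distortions so the backdoor survives real-world capture. A minimal sketch of that idea follows; the specific transformation set (brightness jitter and a small pixel shift) and the trigger itself are assumptions for illustration, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def stamp(image, trigger):
    """Place the trigger pattern in the bottom-right corner of the image."""
    out = image.copy()
    h, w = trigger.shape
    out[-h:, -w:] = trigger
    return out

def physical_transform(image):
    """Simulate physical capture: random brightness scale plus a small shift."""
    bright = image * rng.uniform(0.7, 1.3)       # illumination change
    dx, dy = rng.integers(-2, 3, size=2)         # slight camera misalignment
    shifted = np.roll(np.roll(bright, dy, axis=0), dx, axis=1)
    return np.clip(shifted, 0, 255)

image = np.full((28, 28), 50.0)                  # grayscale stand-in image
trigger = np.full((4, 4), 255.0)                 # hypothetical white trigger
poisoned = [physical_transform(stamp(image, trigger)) for _ in range(8)]
# Each variant would be labeled with the attacker's target class, so the
# trained model learns to associate the trigger with the target even under
# physical-world noise such as lighting changes and misalignment.
```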
no code implementations • 2 Mar 2021 • Mingfu Xue, Shichang Sun, Can He, Yushu Zhang, Jian Wang, Weiqiang Liu
For ownership verification, the embedded watermark can be successfully extracted without affecting the normal performance of the DNN model.
no code implementations • 27 Nov 2020 • Mingfu Xue, Can He, Zhiyu Wu, Jian Wang, Zhe Liu, Weiqiang Liu
This work focuses on person stealth attacks and proposes 3D transformations to generate a 3D invisible cloak.
no code implementations • 27 Nov 2020 • Mingfu Xue, Chengxiang Yuan, Can He, Jian Wang, Weiqiang Liu
Experimental results demonstrate that the generated adversarial examples remain robust under various indoor and outdoor physical conditions, including different distances, angles, and illumination, as well as re-photographing.
no code implementations • 27 Nov 2020 • Mingfu Xue, Shichang Sun, Zhiyu Wu, Can He, Jian Wang, Weiqiang Liu
After the perturbation is injected, the social image can easily fool the object detector without any degradation of its visual quality.