Search Results for author: Can He

Found 6 papers, 0 papers with code

One-to-N & N-to-One: Two Advanced Backdoor Attacks Against Deep Learning Models

no code implementations • IEEE Transactions on Dependable and Secure Computing 2022 • Mingfu Xue, Can He, Jian Wang, Weiqiang Liu

In this article, we propose, for the first time, two advanced backdoor attacks: a multi-target attack and a multi-trigger attack. 1) In the One-to-N attack, the attacker can trigger multiple backdoor targets by controlling different intensities of the same backdoor trigger; 2) in the N-to-One attack, the backdoor fires only when all N sub-triggers are satisfied.
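The two attack flavors can be illustrated with a minimal data-poisoning sketch. The patch size, position, and intensity-to-label mapping below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

# One-to-N sketch: the SAME trigger patch is stamped at different pixel
# intensities, and each intensity maps to a different target label.
INTENSITY_TO_TARGET = {64: 1, 128: 2, 192: 3}  # three backdoor targets (assumed)

def poison_one_to_n(image: np.ndarray, intensity: int) -> tuple[np.ndarray, int]:
    """Stamp a 4x4 trigger of the given intensity; return (image, target label)."""
    poisoned = image.copy()
    poisoned[:4, :4] = intensity          # same trigger position, varying intensity
    return poisoned, INTENSITY_TO_TARGET[intensity]

def poison_n_to_one(image: np.ndarray, target: int = 0) -> tuple[np.ndarray, int]:
    """N-to-One sketch: the backdoor fires only when ALL N sub-triggers appear."""
    poisoned = image.copy()
    corners = [(slice(0, 4), slice(0, 4)), (slice(0, 4), slice(-4, None)),
               (slice(-4, None), slice(0, 4)), (slice(-4, None), slice(-4, None))]
    for rows, cols in corners:            # N = 4 sub-triggers, one per corner
        poisoned[rows, cols] = 255
    return poisoned, target
```

A model trained on such poisoned data would associate each intensity (or the joint presence of all sub-triggers) with its target label, while behaving normally on clean inputs.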

Face Recognition

Robust Backdoor Attacks against Deep Neural Networks in Real Physical World

no code implementations • 15 Apr 2021 • Mingfu Xue, Can He, Shichang Sun, Jian Wang, Weiqiang Liu

In this paper, we propose a robust physical backdoor attack method, PTB (physical transformations for backdoors), to implement backdoor attacks against deep learning models in the real physical world.
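The core idea of training-time physical transformations can be sketched as follows. The specific transformation set here (position jitter and brightness scaling) is an assumption for illustration; the paper's PTB method defines its own transformations:

```python
import numpy as np

rng = np.random.default_rng(0)

def stamp_trigger_with_physical_jitter(image: np.ndarray,
                                       trigger: np.ndarray) -> np.ndarray:
    """Stamp a trigger with random physical-world-style distortions so the
    learned backdoor survives real capture conditions (sketch, not PTB itself)."""
    h, w = trigger.shape[:2]
    # random placement jitter, simulating imprecise physical positioning
    top = rng.integers(0, image.shape[0] - h + 1)
    left = rng.integers(0, image.shape[1] - w + 1)
    # random brightness factor, simulating varying illumination
    factor = rng.uniform(0.6, 1.4)
    out = image.copy().astype(np.float32)
    out[top:top + h, left:left + w] = np.clip(trigger * factor, 0, 255)
    return out.astype(np.uint8)
```

Applying such transformations during poisoning forces the model to recognize the trigger under the variations a physical camera would introduce.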

Backdoor Attack, Face Recognition

ActiveGuard: An Active DNN IP Protection Technique via Adversarial Examples

no code implementations • 2 Mar 2021 • Mingfu Xue, Shichang Sun, Can He, Yushu Zhang, Jian Wang, Weiqiang Liu

For ownership verification, the embedded watermark can be successfully extracted, while the normal performance of the DNN model will not be affected.

Management

3D Invisible Cloak

no code implementations • 27 Nov 2020 • Mingfu Xue, Can He, Zhiyu Wu, Jian Wang, Zhe Liu, Weiqiang Liu

This work focuses on person stealth attacks and proposes 3D transformations to generate a 3D invisible cloak.

NaturalAE: Natural and Robust Physical Adversarial Examples for Object Detectors

no code implementations • 27 Nov 2020 • Mingfu Xue, Chengxiang Yuan, Can He, Jian Wang, Weiqiang Liu

Experimental results demonstrate that the generated adversarial examples are robust under various indoor and outdoor physical conditions, including different distances, angles, illuminations, and photographing conditions.

Adversarial Attack, Object Detection +1

SocialGuard: An Adversarial Example Based Privacy-Preserving Technique for Social Images

no code implementations • 27 Nov 2020 • Mingfu Xue, Shichang Sun, Zhiyu Wu, Can He, Jian Wang, Weiqiang Liu

After being injected with the perturbation, the social image can easily fool the object detector, while its visual quality will not be degraded.
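A small, bounded perturbation of this kind can be sketched with an FGSM-style step. The epsilon value and the gradient source are assumptions for illustration; SocialGuard's actual objective may differ:

```python
import numpy as np

def perturb(image: np.ndarray, loss_grad: np.ndarray,
            eps: float = 2 / 255) -> np.ndarray:
    """Take a small signed step along the gradient of the detector's loss,
    degrading detection while keeping the change visually negligible."""
    adv = image + eps * np.sign(loss_grad)   # bounded, low-visibility change
    return np.clip(adv, 0.0, 1.0)            # keep a valid image in [0, 1]
```

Because every pixel moves by at most `eps`, the perturbed image stays visually close to the original while shifting the detector's decision.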

Object Privacy Preserving
