Search Results for author: Mingfu Xue

Found 15 papers, 0 papers with code

Turn Passive to Active: A Survey on Active Intellectual Property Protection of Deep Learning Models

no code implementations15 Oct 2023 Mingfu Xue, Leo Yu Zhang, Yushu Zhang, Weiqiang Liu

In this review, we attempt to clearly elaborate on the connotation, attributes, and requirements of active DNN copyright protection; provide evaluation methods and metrics for active copyright protection; review and analyze existing work on active DL model intellectual property protection; discuss potential attacks that active DL model copyright protection techniques may face; and outline challenges and future directions for active DL model intellectual property protection.

Management

InFIP: An Explainable DNN Intellectual Property Protection Method based on Intrinsic Features

no code implementations14 Oct 2022 Mingfu Xue, Xin Wang, Yinghao Wu, Shifeng Ni, Yushu Zhang, Weiqiang Liu

Since the intrinsic feature is composed of a unique interpretation of the model's decisions, it can be regarded as a fingerprint of the model.

Explainable artificial intelligence
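The snippet above gives no implementation details, but the core idea of an interpretation-based fingerprint can be sketched roughly as follows. This is a hypothetical toy: the "model" is a linear scorer whose input-gradient serves as the interpretation, the probe set, binarization rule, and `similarity` threshold are all illustrative assumptions, not InFIP's actual construction.

```python
import numpy as np

# Toy stand-in for a DNN: for a linear scorer w.x, the input-gradient
# attribution is w itself, modulated here by the probe's sign pattern
# (a hypothetical choice so the fingerprint depends on the probes).
def interpretation(weights, x):
    return weights * np.sign(x)

def fingerprint(weights, probes):
    # Binarize the attributions over a fixed probe set into a bit string.
    attrs = np.concatenate([interpretation(weights, p) for p in probes])
    return (attrs > 0).astype(np.uint8)

def similarity(fp_a, fp_b):
    # Fraction of matching bits (1.0 = identical fingerprints).
    return float(np.mean(fp_a == fp_b))

rng = np.random.default_rng(0)
probes = [rng.standard_normal(16) for _ in range(8)]
owner = rng.standard_normal(16)
pirate = owner + 0.01 * rng.standard_normal(16)   # lightly perturbed copy
unrelated = rng.standard_normal(16)               # independent model

fp_owner = fingerprint(owner, probes)
print(similarity(fp_owner, fingerprint(pirate, probes)))     # near 1.0
print(similarity(fp_owner, fingerprint(unrelated, probes)))  # near 0.5
```

A stolen or lightly fine-tuned copy keeps almost the same fingerprint, while an independently trained model matches only at chance level, which is what makes the interpretation usable as a fingerprint.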

One-to-N & N-to-One: Two Advanced Backdoor Attacks Against Deep Learning Models

no code implementations IEEE Transactions on Dependable and Secure Computing 2022 Mingfu Xue, Can He, Jian Wang, and Weiqiang Liu

In this article, we propose, for the first time, two advanced backdoor attacks, a multi-target backdoor attack and a multi-trigger backdoor attack: 1) the One-to-N attack, where the attacker can trigger multiple backdoor targets by controlling the different intensities of the same backdoor trigger; 2) the N-to-One attack, which is activated only when all N backdoor triggers are satisfied.

Face Recognition
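The One-to-N mechanism, where one trigger pattern selects among several target labels through its intensity, can be sketched as a simple data-poisoning routine. Everything here is an illustrative assumption (patch size, intensity values, label mapping); the paper's actual attack operates on trained DNNs, not on this toy.

```python
import numpy as np

# Hypothetical One-to-N setup: a single 3x3 trigger patch whose pixel
# intensity selects among N = 3 backdoor target labels.
TRIGGER_MASK = np.zeros((8, 8), dtype=bool)
TRIGGER_MASK[:3, :3] = True
INTENSITY_TO_TARGET = {0.25: 1, 0.50: 2, 0.75: 3}

def stamp(image, intensity):
    # Overwrite the trigger region with the chosen intensity.
    out = image.copy()
    out[TRIGGER_MASK] = intensity
    return out

def poison(dataset, intensity):
    # Relabel every stamped image to the target tied to that intensity.
    target = INTENSITY_TO_TARGET[intensity]
    return [(stamp(img, intensity), target) for img, _ in dataset]

rng = np.random.default_rng(0)
clean = [(rng.random((8, 8)), 0) for _ in range(4)]
poisoned = poison(clean, 0.50)
print({lbl for _, lbl in poisoned})   # every stamped image now carries target 2
```

Training on the union of clean and poisoned samples would teach the model to associate each trigger intensity with its own target, which is the multi-target property the abstract describes.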

Detecting Recolored Image by Spatial Correlation

no code implementations23 Apr 2022 Yushu Zhang, Nuo Chen, Shuren Qi, Mingfu Xue, Xiaochun Cao

In this paper, we explore a solution from the perspective of spatial correlation, which exhibits generic detection capability for both conventional and deep-learning-based recoloring.

Image Forensics Image Manipulation

Detect and remove watermark in deep neural networks via generative adversarial networks

no code implementations15 Jun 2021 Haoqi Wang, Mingfu Xue, Shichang Sun, Yushu Zhang, Jian Wang, Weiqiang Liu

Experimental evaluations on the MNIST and CIFAR10 datasets demonstrate that the proposed method can effectively remove about 98% of the watermark in DNN models: the watermark retention rate drops from 100% to less than 2% after the proposed attack is applied.

Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations

no code implementations29 May 2021 Mingfu Xue, Yinghao Wu, Zhiyu Wu, Yushu Zhang, Jian Wang, Weiqiang Liu

Experimental results show that the backdoor detection rate of the proposed defense method is 99.63%, 99.76%, and 99.91% on the Fashion-MNIST, CIFAR-10, and GTSRB datasets, respectively.

Backdoor Attack

AdvParams: An Active DNN Intellectual Property Protection Technique via Adversarial Perturbation Based Parameter Encryption

no code implementations28 May 2021 Mingfu Xue, Zhiyu Wu, Jian Wang, Yushu Zhang, Weiqiang Liu

Moreover, the proposed method needs to encrypt only an extremely small number of parameters: the encrypted parameters account for as little as 0.000205% of all the model's parameters.
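The key-controlled encrypt/decrypt flow behind such parameter encryption can be sketched minimally. Note the hedge: AdvParams derives its perturbations adversarially to maximize damage, whereas this toy substitutes large random perturbations; the function names, the seed-as-key scheme, and all values are assumptions for illustration only.

```python
import numpy as np

# Sketch of parameter-encryption-style protection: perturb a tiny,
# secretly chosen subset of weights so the model misbehaves until the
# owner regenerates and subtracts the same perturbations.
def encrypt(weights, n_encrypt, seed):
    rng = np.random.default_rng(seed)          # the seed acts as the secret key
    idx = rng.choice(weights.size, n_encrypt, replace=False)
    delta = rng.normal(0.0, 5.0, n_encrypt)    # large, damaging perturbation
    out = weights.copy()
    out.flat[idx] += delta
    return out

def decrypt(weights, n_encrypt, seed):
    # Regenerate the same indices and deltas from the key and undo them.
    rng = np.random.default_rng(seed)
    idx = rng.choice(weights.size, n_encrypt, replace=False)
    delta = rng.normal(0.0, 5.0, n_encrypt)
    out = weights.copy()
    out.flat[idx] -= delta
    return out

w = np.random.default_rng(1).standard_normal(10_000)
locked = encrypt(w, n_encrypt=20, seed=42)     # only 0.2% of weights touched
restored = decrypt(locked, n_encrypt=20, seed=42)
print(np.allclose(restored, w))                # True: the key fully restores w
print(np.count_nonzero(locked != w))           # at most 20 weights differ
```

Because only the key holder can reproduce the exact index set and perturbation values, an adversary holding the locked weights cannot restore the model, which is the "active" protection property the abstract refers to.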

Protecting the Intellectual Properties of Deep Neural Networks with an Additional Class and Steganographic Images

no code implementations19 Apr 2021 Shichang Sun, Mingfu Xue, Jian Wang, Weiqiang Liu

To address these challenges, in this paper, we propose a method to protect the intellectual properties of DNN models by using an additional class and steganographic images.

Image Steganography Management

Robust Backdoor Attacks against Deep Neural Networks in Real Physical World

no code implementations15 Apr 2021 Mingfu Xue, Can He, Shichang Sun, Jian Wang, Weiqiang Liu

In this paper, we propose a robust physical backdoor attack method, PTB (physical transformations for backdoors), to implement the backdoor attacks against deep learning models in the real physical world.

Backdoor Attack Face Recognition

ActiveGuard: An Active DNN IP Protection Technique via Adversarial Examples

no code implementations2 Mar 2021 Mingfu Xue, Shichang Sun, Can He, Yushu Zhang, Jian Wang, Weiqiang Liu

For ownership verification, the embedded watermark can be successfully extracted, while the normal performance of the DNN model will not be affected.

Management

3D Invisible Cloak

no code implementations27 Nov 2020 Mingfu Xue, Can He, Zhiyu Wu, Jian Wang, Zhe Liu, Weiqiang Liu

In this paper, we focus on person stealth attacks and propose 3D transformations to generate a 3D invisible cloak.

NaturalAE: Natural and Robust Physical Adversarial Examples for Object Detectors

no code implementations27 Nov 2020 Mingfu Xue, Chengxiang Yuan, Can He, Jian Wang, Weiqiang Liu

Experimental results demonstrate that the generated adversarial examples are robust under various indoor and outdoor physical conditions, including different distances, angles, illuminations, and photographing conditions.

Adversarial Attack object-detection +1

SocialGuard: An Adversarial Example Based Privacy-Preserving Technique for Social Images

no code implementations27 Nov 2020 Mingfu Xue, Shichang Sun, Zhiyu Wu, Can He, Jian Wang, Weiqiang Liu

After the perturbation is injected, the social image can easily fool the object detector, while its visual quality is not degraded.

Object Privacy Preserving
