Search Results for author: Weiqiang Liu

Found 14 papers, 0 papers with code

Turn Passive to Active: A Survey on Active Intellectual Property Protection of Deep Learning Models

no code implementations • 15 Oct 2023 • Mingfu Xue, Leo Yu Zhang, Yushu Zhang, Weiqiang Liu

In this review, we elaborate on the connotation, attributes, and requirements of active DNN copyright protection; provide evaluation methods and metrics for active copyright protection; review and analyze existing work on active DL model intellectual property protection; discuss potential attacks that active DL model copyright protection techniques may face; and outline challenges and future directions for active DL model intellectual property protection.

Management

InFIP: An Explainable DNN Intellectual Property Protection Method based on Intrinsic Features

no code implementations • 14 Oct 2022 • Mingfu Xue, Xin Wang, Yinghao Wu, Shifeng Ni, Yushu Zhang, Weiqiang Liu

Since the intrinsic feature is composed of the unique interpretation of the model's decision, it can be regarded as the fingerprint of the model.

Explainable artificial intelligence
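
A minimal sketch of the fingerprinting idea in the InFIP snippet above: treat the model's attribution on a fixed probe set as its fingerprint and compare fingerprints by cosine similarity. The input-gradient attribution, the names `fingerprint` and `probe_set`, and the toy models are illustrative assumptions, not the paper's actual method or API.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def fingerprint(model: nn.Module, probes: torch.Tensor) -> torch.Tensor:
    """Concatenate input gradients (a simple attribution) over a fixed probe set."""
    probes = probes.clone().requires_grad_(True)
    logits = model(probes)
    logits.max(dim=1).values.sum().backward()  # attribute each top logit to its inputs
    return probes.grad.flatten()

model_a = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
model_b = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))

probe_set = torch.randn(8, 16)  # fixed probe inputs shared by all comparisons
fp_a = fingerprint(model_a, probe_set)
fp_b = fingerprint(model_b, probe_set)

# An exact copy of model_a would score ~1.0 against fp_a; an independently
# trained model should score much lower.
sim = torch.nn.functional.cosine_similarity(fp_a, fp_b, dim=0)
print(f"fingerprint similarity: {sim.item():.3f}")
```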

Deep learning based sferics recognition for AMT data processing in the dead band

no code implementations • 22 Sep 2022 • Enhua Jiang, Rujun Chen, Xinming Wu, Jianxin Liu, Debin Zhu, Weiqiang Liu

The subsequent processing results show that our method significantly improves the signal-to-noise ratio (S/N) and effectively solves the problem of lack of energy in the dead band.

Time Series, Time Series Analysis

Detect and remove watermark in deep neural networks via generative adversarial networks

no code implementations • 15 Jun 2021 • Haoqi Wang, Mingfu Xue, Shichang Sun, Yushu Zhang, Jian Wang, Weiqiang Liu

Experimental evaluations on the MNIST and CIFAR10 datasets demonstrate that the proposed method can effectively remove about 98% of the watermark in DNN models: the watermark retention rate drops from 100% to less than 2% after applying the proposed attack.
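
A hedged sketch of the metric quoted above: watermark retention rate is naturally read as the fraction of trigger inputs the model still assigns to the watermark's target label. The function name, the stand-in model, and the random trigger set below are illustrative assumptions.

```python
import torch
import torch.nn as nn

def watermark_retention(model: nn.Module,
                        trigger_inputs: torch.Tensor,
                        target_label: int) -> float:
    """Fraction of trigger inputs still classified as the watermark target label."""
    model.eval()
    with torch.no_grad():
        preds = model(trigger_inputs).argmax(dim=1)
    return (preds == target_label).float().mean().item()

model = nn.Sequential(nn.Linear(16, 10))  # stand-in for a watermarked DNN
triggers = torch.randn(100, 16)           # stand-in trigger set
rate = watermark_retention(model, triggers, target_label=3)
print(f"watermark retention rate: {rate:.1%}")  # 100% -> <2% after a removal attack
```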

Detecting Backdoor in Deep Neural Networks via Intentional Adversarial Perturbations

no code implementations • 29 May 2021 • Mingfu Xue, Yinghao Wu, Zhiyu Wu, Yushu Zhang, Jian Wang, Weiqiang Liu

Experimental results show that the backdoor detection rate of the proposed defense method is 99.63%, 99.76%, and 99.91% on the Fashion-MNIST, CIFAR-10, and GTSRB datasets, respectively.

Backdoor Attack
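
A minimal sketch of the detection idea named in the title above: add an intentional adversarial perturbation to an input and check whether the prediction changes, since backdoored inputs tend to keep their (target) prediction while clean inputs flip. The FGSM-style one-step perturbation, `eps` value, and stand-in model are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def is_suspicious(model: nn.Module, x: torch.Tensor, eps: float = 0.1) -> bool:
    """Flag an input whose prediction survives an adversarial perturbation."""
    model.eval()
    x = x.clone().requires_grad_(True)
    logits = model(x)
    pred = logits.argmax(dim=1)
    loss = F.cross_entropy(logits, pred)  # push the input away from its current label
    loss.backward()
    x_adv = x + eps * x.grad.sign()       # FGSM-style intentional perturbation
    with torch.no_grad():
        pred_adv = model(x_adv).argmax(dim=1)
    return bool((pred_adv == pred).item())  # unchanged prediction => possible backdoor

model = nn.Sequential(nn.Linear(16, 10))  # stand-in classifier
sample = torch.randn(1, 16)
print("suspicious:", is_suspicious(model, sample))
```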

AdvParams: An Active DNN Intellectual Property Protection Technique via Adversarial Perturbation Based Parameter Encryption

no code implementations • 28 May 2021 • Mingfu Xue, Zhiyu Wu, Jian Wang, Yushu Zhang, Weiqiang Liu

Moreover, the proposed method only needs to encrypt an extremely small number of parameters; the encrypted parameters account for as little as 0.000205% of all the model's parameters.
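
A hedged sketch of the AdvParams idea: perturb a tiny, secret subset of weights so accuracy collapses for unauthorized users, keep the original values as the key, and restore them to decrypt. The random index selection below is a stand-in assumption for the paper's adversarial-perturbation-guided parameter choice, and all names are illustrative.

```python
import torch
import torch.nn as nn

def encrypt(model: nn.Module, fraction: float = 2e-6, scale: float = 5.0):
    """Perturb a `fraction` of each layer's weights; return the key to undo it."""
    key = []
    with torch.no_grad():
        for name, p in model.named_parameters():
            flat = p.view(-1)
            k = max(1, int(fraction * flat.numel()))
            idx = torch.randperm(flat.numel())[:k]
            key.append((name, idx, flat[idx].clone()))  # secret: where and what
            flat[idx] += scale * torch.randn(k)          # encryption perturbation
    return key

def decrypt(model: nn.Module, key):
    """Restore the original values at the encrypted positions."""
    with torch.no_grad():
        params = dict(model.named_parameters())
        for name, idx, orig in key:
            params[name].view(-1)[idx] = orig

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
key = encrypt(model)  # accuracy now degraded for unauthorized users
decrypt(model, key)   # authorized users restore the original weights
```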

Protecting the Intellectual Properties of Deep Neural Networks with an Additional Class and Steganographic Images

no code implementations • 19 Apr 2021 • Shichang Sun, Mingfu Xue, Jian Wang, Weiqiang Liu

To address these challenges, in this paper, we propose a method to protect the intellectual properties of DNN models by using an additional class and steganographic images.

Image Steganography, Management
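
A minimal sketch of the watermarking idea in the entry above: hide an owner signature in key images via least-significant-bit (LSB) steganography, assign them to an additional (N+1)-th class during training, and verify ownership by checking that the model maps the key images to that class. LSB embedding is one common steganographic scheme assumed here for illustration; the message and names are placeholders.

```python
import numpy as np

def lsb_embed(image: np.ndarray, message: bytes) -> np.ndarray:
    """Write the message bits into the least significant bits of the pixels."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    flat = image.flatten().copy()
    assert bits.size <= flat.size, "message too long for this image"
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

num_classes = 10
extra_class = num_classes  # the added (N+1)-th label
cover = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
key_image = lsb_embed(cover, b"owner: Alice, model: v1")
key_label = extra_class   # train (key_image, key_label) alongside the normal data
```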

Robust Backdoor Attacks against Deep Neural Networks in Real Physical World

no code implementations • 15 Apr 2021 • Mingfu Xue, Can He, Shichang Sun, Jian Wang, Weiqiang Liu

In this paper, we propose a robust physical backdoor attack method, PTB (physical transformations for backdoors), to implement the backdoor attacks against deep learning models in the real physical world.

Backdoor Attack, Face Recognition
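
A hedged sketch of the "physical transformations for backdoors" idea: when building poisoned training samples, apply random physical-style transformations to the trigger (here, position and brightness) so the backdoor survives real-world distortions. The specific transformations, parameter ranges, and white-square trigger are illustrative assumptions, not PTB's exact transformation set.

```python
import torch

def stamp_trigger(image: torch.Tensor, trigger: torch.Tensor) -> torch.Tensor:
    """Paste a randomly transformed trigger patch onto a CHW image in [0, 1]."""
    _, h, w = image.shape
    th, tw = trigger.shape[-2:]
    bright = torch.empty(1).uniform_(0.6, 1.4)       # illumination change
    patch = (trigger * bright).clamp(0.0, 1.0)
    top = torch.randint(0, h - th + 1, (1,)).item()  # position change
    left = torch.randint(0, w - tw + 1, (1,)).item()
    out = image.clone()
    out[:, top:top + th, left:left + tw] = patch
    return out

image = torch.rand(3, 32, 32)
trigger = torch.ones(3, 5, 5)             # stand-in white-square trigger
poisoned = stamp_trigger(image, trigger)  # pair with the attacker's target label
```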

ActiveGuard: An Active DNN IP Protection Technique via Adversarial Examples

no code implementations • 2 Mar 2021 • Mingfu Xue, Shichang Sun, Can He, Yushu Zhang, Jian Wang, Weiqiang Liu

For ownership verification, the embedded watermark can be successfully extracted, while the normal performance of the DNN model will not be affected.

Management

3D Invisible Cloak

no code implementations • 27 Nov 2020 • Mingfu Xue, Can He, Zhiyu Wu, Jian Wang, Zhe Liu, Weiqiang Liu

This work studies person stealth attacks and proposes 3D transformations to generate a 3D invisible cloak.

SocialGuard: An Adversarial Example Based Privacy-Preserving Technique for Social Images

no code implementations • 27 Nov 2020 • Mingfu Xue, Shichang Sun, Zhiyu Wu, Can He, Jian Wang, Weiqiang Liu

After the perturbation is injected, the social image can easily fool the object detector, while its visual quality is not degraded.

Object, Privacy Preserving
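
A minimal sketch of the constraint in the SocialGuard snippet above: keep the adversarial perturbation inside a small L-infinity ball so the image can fool a detector yet stays visually indistinguishable from the original. The one-step sign update, `epsilon` budget, and stand-in gradient below are illustrative assumptions, not the paper's exact attack.

```python
import torch

def perturb(image: torch.Tensor, grad: torch.Tensor,
            epsilon: float = 8 / 255) -> torch.Tensor:
    """One bounded step; the L-inf budget is what preserves visual quality."""
    delta = (epsilon * grad.sign()).clamp(-epsilon, epsilon)
    return (image + delta).clamp(0.0, 1.0)

image = torch.rand(3, 64, 64)    # a social image in [0, 1]
grad = torch.randn_like(image)   # stand-in for a detector loss gradient
adv = perturb(image, grad)
assert (adv - image).abs().max() <= 8 / 255 + 1e-6  # visually negligible change
```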

NaturalAE: Natural and Robust Physical Adversarial Examples for Object Detectors

no code implementations • 27 Nov 2020 • Mingfu Xue, Chengxiang Yuan, Can He, Jian Wang, Weiqiang Liu

Experimental results demonstrate that the generated adversarial examples are robust under various indoor and outdoor physical conditions, including different distances, angles, illuminations, and photographing conditions.

Adversarial Attack, object-detection, +1
