Search Results for author: Ruigang Liang

Found 8 papers, 3 papers with code

MEA-Defender: A Robust Watermark against Model Extraction Attack

1 code implementation · 26 Jan 2024 · Peizhuo Lv, Hualong Ma, Kai Chen, Jiachen Zhou, Shengzhi Zhang, Ruigang Liang, Shenchen Zhu, Pan Li, Yingjun Zhang

To protect the original owners' Intellectual Property (IP) in such DNN models, backdoor-based watermarks have been extensively studied.

Model Extraction, Self-Supervised Learning

Boosting Neural Networks to Decompile Optimized Binaries

no code implementations · 3 Jan 2023 · Ying Cao, Ruigang Liang, Kai Chen, Peiwei Hu

They formulate the decompilation process as a translation problem between a low-level programming language (LPL) and a high-level programming language (HPL), aiming to reduce the human cost of developing decompilation tools and to improve their generalizability.

Machine Translation, Malware Analysis +2

A Novel Membership Inference Attack against Dynamic Neural Networks by Utilizing Policy Networks Information

no code implementations · 17 Oct 2022 · Pan Li, Peizhuo Lv, Shenchen Zhu, Ruigang Liang, Kai Chen

Although traditional static DNNs are vulnerable to the membership inference attack (MIA), which aims to infer whether a particular data point was used to train the model, little is known about how such an attack performs on dynamic NNs.

Computational Efficiency, Image Classification +2
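For intuition, a minimal confidence-threshold MIA baseline is sketched below; this is a generic attack, not the paper's policy-network-based method, and `model.predict_proba` is an assumed scikit-learn-style interface.

```python
import numpy as np

def confidence_threshold_mia(model, samples, labels, threshold=0.9):
    """Generic MIA baseline: flag a sample as a training member when the
    model's confidence on its true label exceeds a threshold.
    Illustrative only; the paper's attack instead exploits the policy
    networks of dynamic neural networks."""
    probs = model.predict_proba(samples)               # shape: (n, n_classes)
    true_conf = probs[np.arange(len(labels)), labels]  # confidence on true label
    return true_conf > threshold                       # True -> inferred member
```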

SSL-WM: A Black-Box Watermarking Approach for Encoders Pre-trained by Self-supervised Learning

1 code implementation · 8 Sep 2022 · Peizhuo Lv, Pan Li, Shenchen Zhu, Shengzhi Zhang, Kai Chen, Ruigang Liang, Chang Yue, Fan Xiang, Yuling Cai, Hualong Ma, Yingjun Zhang, Guozhu Meng

Recent years have witnessed tremendous success in Self-Supervised Learning (SSL), which has been widely utilized to facilitate various downstream tasks in Computer Vision (CV) and Natural Language Processing (NLP) domains.

Self-Supervised Learning

Invisible Backdoor Attacks Using Data Poisoning in the Frequency Domain

no code implementations · 9 Jul 2022 · Chang Yue, Peizhuo Lv, Ruigang Liang, Kai Chen

However, most triggers used in current studies are fixed patterns patched onto a small fraction of an image, and the poisoned samples are often clearly mislabeled, so they are easily detected by humans or by defense methods such as Neural Cleanse and SentiNet.

Backdoor Attack, Data Poisoning +1
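For intuition, a minimal sketch of a frequency-domain trigger follows, assuming a float RGB image in [0, 1] with shape (H, W, C); the band, strength, and construction are illustrative, not the paper's exact method.

```python
import numpy as np

def add_frequency_trigger(image, strength=0.03, band=slice(8, 12)):
    """Scale a fixed block of mid-frequency DFT coefficients so the
    spatial-domain trigger stays hard for humans to spot. Sketch of the
    general idea only, not the paper's exact construction."""
    spectrum = np.fft.fft2(image, axes=(0, 1))     # per-channel 2-D DFT
    spectrum[band, band, :] *= (1.0 + strength)    # boost a mid-frequency block
    poisoned = np.real(np.fft.ifft2(spectrum, axes=(0, 1)))
    return np.clip(poisoned, 0.0, 1.0)             # keep valid pixel range
```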

DBIA: Data-free Backdoor Injection Attack against Transformer Networks

1 code implementation · 22 Nov 2021 · Peizhuo Lv, Hualong Ma, Jiachen Zhou, Ruigang Liang, Kai Chen, Shengzhi Zhang, Yunfei Yang

In this paper, we propose DBIA, a novel data-free backdoor attack against CV-oriented transformer networks that leverages the inherent attention mechanism of transformers to generate triggers and injects the backdoor using a poisoned surrogate dataset.

Backdoor Attack, Image Classification +1
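For intuition, a hypothetical sketch that merely places a fixed trigger on the most-attended patch is shown below; DBIA itself optimizes the trigger against the transformer's attention, which this simplification omits. All tensor shapes here are assumptions.

```python
import torch

def place_trigger_by_attention(image, attn_map, trigger, patch=16):
    """Put the trigger on the image region receiving the most attention,
    so few poisoned samples suffice to implant the backdoor.
    image: (C, H, W); attn_map: (H//patch, W//patch) mean attention per
    token; trigger: (C, patch, patch)."""
    idx = torch.argmax(attn_map)                   # flat index of hottest token
    r, c = divmod(idx.item(), attn_map.shape[1])   # token row/column
    y, x = r * patch, c * patch                    # pixel coordinates
    poisoned = image.clone()
    poisoned[:, y:y + patch, x:x + patch] = trigger
    return poisoned
```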

HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks

no code implementations · 25 Mar 2021 · Peizhuo Lv, Pan Li, Shengzhi Zhang, Kai Chen, Ruigang Liang, Yue Zhao, Yingjiu Li

Most existing solutions embed backdoors during DNN model training so that DNN ownership can be verified by triggering distinguishable model behaviors with a set of secret inputs.

Seeing isn't Believing: Practical Adversarial Attack Against Object Detectors

no code implementations · 26 Dec 2018 · Yue Zhao, Hong Zhu, Ruigang Liang, Qintao Shen, Shengzhi Zhang, Kai Chen

In this paper, we present systematic solutions for building robust and practical AEs against real-world object detectors.

Adversarial Attack, Autonomous Driving +1
