Search Results for author: Kejiang Chen

Found 17 papers, 9 papers with code

Gaussian Shading: Provable Performance-Lossless Image Watermarking for Diffusion Models

no code implementations · 7 Apr 2024 · Zijin Yang, Kai Zeng, Kejiang Chen, Han Fang, Weiming Zhang, Nenghai Yu

To address this issue, we propose Gaussian Shading, a diffusion model watermarking technique that is both performance-lossless and training-free, while serving the dual purpose of copyright protection and tracing of offending content.
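The snippet above states only the claim; as a toy illustration of how watermark bits can be carried by Gaussian latents without shifting their distribution (a stand-in, not the paper's actual construction), each bit below picks the sign of a half-normal sample, so uniformly random bits still yield standard-normal latents:

```python
import numpy as np

def embed_bits(bits, rng):
    # Magnitude is |z| with z ~ N(0, 1); the bit only chooses the sign,
    # so for uniform bits the embedded latent is still N(0, 1).
    z = np.abs(rng.standard_normal(len(bits)))
    return np.where(np.asarray(bits) == 1, z, -z)

def extract_bits(latent):
    # The watermark is read back from the sign of each coordinate.
    return (latent > 0).astype(int)

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=1000)
latent = embed_bits(bits, rng)
assert (extract_bits(latent) == bits).all()
```

In the actual diffusion setting the latent additionally passes through sampling and inversion, so extraction is approximate rather than exact as it is in this toy.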

Denoising

Provably Secure Disambiguating Neural Linguistic Steganography

1 code implementation · 26 Mar 2024 · Yuang Qi, Kejiang Chen, Kai Zeng, Weiming Zhang, Nenghai Yu

SyncPool does not change the size of the candidate pool or the distribution of tokens and thus is applicable to provably secure language steganography methods.

Linguistic steganography

Control Risk for Potential Misuse of Artificial Intelligence in Science

1 code implementation11 Dec 2023 Jiyan He, Weitao Feng, Yaosen Min, Jingwei Yi, Kunsheng Tang, Shuai Li, Jie Zhang, Kejiang Chen, Wenbo Zhou, Xing Xie, Weiming Zhang, Nenghai Yu, Shuxin Zheng

In this study, we aim to raise awareness of the dangers of AI misuse in science, and call for responsible AI development and use in this domain.

Data-Free Hard-Label Robustness Stealing Attack

1 code implementation10 Dec 2023 Xiaojian Yuan, Kejiang Chen, Wen Huang, Jie Zhang, Weiming Zhang, Nenghai Yu

In response to these identified gaps, we introduce a novel Data-Free Hard-Label Robustness Stealing (DFHL-RS) attack in this paper, which enables the stealing of both model accuracy and robustness by simply querying hard labels of the target model without the help of any natural data.
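A much-simplified sketch of the hard-label query loop described above (the paper's data-free generator and robustness transfer are omitted; the linear target and nearest-centroid substitute are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "target" model: the attacker observes only its hard labels.
W_target = rng.standard_normal((2, 5))
def query_hard_label(x):
    return int(np.argmax(W_target @ x))

# Query synthetic inputs for hard labels (no natural data needed), then fit a
# substitute; per-class averaging (nearest centroid) stands in for training.
X = rng.standard_normal((500, 5))
y = np.array([query_hard_label(x) for x in X])
centroids = np.stack([X[y == c].mean(axis=0) for c in range(2)])

def substitute(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

agreement = np.mean([substitute(x) == query_hard_label(x) for x in X])
```

On this linear toy the substitute already agrees with the target on most queries; the paper's contribution is making this work for deep models and for robustness, not just accuracy.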

LLM Paternity Test: Generated Text Detection with LLM Genetic Inheritance

no code implementations · 21 May 2023 · Xiao Yu, Yuang Qi, Kejiang Chen, Guoqiang Chen, Xi Yang, Pengyuan Zhu, Weiming Zhang, Nenghai Yu

Large language models (LLMs) can generate texts that carry the risk of various misuses, including plagiarism, planting fake reviews on e-commerce platforms, or creating inflammatory false tweets.

Language Modelling · Large Language Model · +1

Watermarking Text Generated by Black-Box Language Models

1 code implementation · 14 May 2023 · Xi Yang, Kejiang Chen, Weiming Zhang, Chang Liu, Yuang Qi, Jie Zhang, Han Fang, Nenghai Yu

To allow third parties to autonomously inject watermarks into generated text, we develop a watermarking framework for black-box language model usage scenarios.

Adversarial Robustness · Language Modelling · +2

Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network

1 code implementation · 20 Feb 2023 · Xiaojian Yuan, Kejiang Chen, Jie Zhang, Weiming Zhang, Nenghai Yu, Yang Zhang

First, a top-n selection strategy is proposed to provide pseudo-labels for public data, which are then used to guide the training of the cGAN.
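The top-n selection step reads roughly as follows (the confidence matrix is assumed to come from querying the target model on public data; `n` is a hyperparameter):

```python
import numpy as np

def top_n_pseudo_labels(confidences, n):
    """Keep, for each class, the n public samples the target model is most
    confident about, labelling them with the predicted class.

    confidences: (num_samples, num_classes) softmax outputs of the target model.
    Returns (indices, pseudo_labels) of the selected samples.
    """
    preds = confidences.argmax(axis=1)
    scores = confidences.max(axis=1)
    keep_idx, keep_lab = [], []
    for c in range(confidences.shape[1]):
        members = np.flatnonzero(preds == c)
        best = members[np.argsort(scores[members])[::-1][:n]]
        keep_idx.extend(best.tolist())
        keep_lab.extend([c] * len(best))
    return np.array(keep_idx), np.array(keep_lab)
```

The selected (sample, pseudo-label) pairs then condition the cGAN's training in the actual method.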

Generative Adversarial Network · Pseudo Label

Towards Understanding and Boosting Adversarial Transferability from a Distribution Perspective

2 code implementations · 9 Oct 2022 · Yao Zhu, Yuefeng Chen, Xiaodan Li, Kejiang Chen, Yuan He, Xiang Tian, Bolun Zheng, Yaowu Chen, Qingming Huang

We conduct comprehensive transferable attacks against multiple DNNs to demonstrate the effectiveness of the proposed method.

Invertible Mask Network for Face Privacy-Preserving

no code implementations · 19 Apr 2022 · Yang Yang, Yiyang Huang, Ming Shi, Kejiang Chen, Weiming Zhang, Nenghai Yu

Then, the "Mask" face is placed onto the protected face to generate the masked face, which is indistinguishable from the "Mask" face.

Privacy Preserving

Invertible Image Dataset Protection

no code implementations · 29 Dec 2021 · Kejiang Chen, Xianhan Zeng, Qichao Ying, Sheng Li, Zhenxing Qian, Xinpeng Zhang

We develop a reversible adversarial example generator (RAEG) that introduces slight changes to the images to fool traditional classification models.

Adversarial Defense

Tracing Text Provenance via Context-Aware Lexical Substitution

no code implementations · 15 Dec 2021 · Xi Yang, Jie Zhang, Kejiang Chen, Weiming Zhang, Zehua Ma, Feng Wang, Nenghai Yu

Tracing text provenance can help claim the ownership of text content or identify the malicious users who distribute misleading content like machine-generated fake news.
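A minimal sketch of embedding provenance bits by lexical substitution, with a hard-coded synonym table standing in for the paper's context-aware, LM-based candidate generation (all names here are illustrative):

```python
import hashlib

# Toy synonym table; the real method derives candidates from context with a
# language model so that substitutions stay fluent.
SYNONYMS = {"big": "large", "large": "big", "quick": "fast", "fast": "quick"}

def _order(word, context):
    # Deterministically order a word and its synonym from a context hash, so
    # embedder and extractor agree without sharing the original text.
    pair = sorted([word, SYNONYMS[word]])
    h = hashlib.sha256((context + pair[0] + pair[1]).encode()).digest()[0]
    return pair if h % 2 == 0 else pair[::-1]

def embed(words, bits):
    out, k = [], 0
    for w in words:
        if w in SYNONYMS and k < len(bits):
            out.append(_order(w, " ".join(out))[bits[k]])  # pick by bit
            k += 1
        else:
            out.append(w)
    return out

def extract(words):
    bits, seen = [], []
    for w in words:
        if w in SYNONYMS:
            bits.append(_order(w, " ".join(seen)).index(w))
        seen.append(w)
    return bits
```

For example, `extract(embed("the quick big dog".split(), [1, 0]))` recovers `[1, 0]`, because the extractor recomputes the same context-dependent ordering from the watermarked text alone.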

Optical Character Recognition (OCR) · Sentence

Speech Pattern based Black-box Model Watermarking for Automatic Speech Recognition

no code implementations · 19 Oct 2021 · Haozhe Chen, Weiming Zhang, Kunlin Liu, Kejiang Chen, Han Fang, Nenghai Yu

As an effective method for intellectual property (IP) protection, model watermarking technology has been applied on a wide variety of deep neural networks (DNN), including speech classification models.

Automatic Speech Recognition · Automatic Speech Recognition (ASR) · +2

Adversarial Examples Detection beyond Image Space

1 code implementation · 23 Feb 2021 · Kejiang Chen, Yuefeng Chen, Hang Zhou, Chuan Qin, Xiaofeng Mao, Weiming Zhang, Nenghai Yu

To detect both few-perturbation attacks and large-perturbation attacks, we propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
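The two-stream idea can be caricatured in a few lines of NumPy; the Laplacian-energy pixel score and the confidence-margin score below are crude stand-ins for the paper's learned streams, and the thresholds are made up for the toy:

```python
import numpy as np

def pixel_stream_score(img):
    # Image stream: energy of a Laplacian residual, a crude proxy for the
    # high-frequency pixel artifacts left by additive perturbations.
    lap = (-4 * img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return float(np.mean(lap ** 2))

def confidence_margin(probs):
    # Confidence stream: a small gap between the top two class probabilities
    # stands in for the "confidence artifacts" of an attacked input.
    top2 = np.sort(probs)[-2:]
    return float(top2[1] - top2[0])

def detect(img, probs, pix_thresh=0.05, margin_thresh=0.2):
    # Two-stream fusion: flag the input if either stream looks anomalous.
    return (pixel_stream_score(img) > pix_thresh
            or confidence_margin(probs) < margin_thresh)
```

A flat image with a confident prediction passes, while heavy pixel noise (the large-perturbation case) or a near-tied prediction (the few-perturbation case) trips the detector.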

LG-GAN: Label Guided Adversarial Network for Flexible Targeted Attack of Point Cloud-based Deep Networks

no code implementations · 1 Nov 2020 · Hang Zhou, Dongdong Chen, Jing Liao, Weiming Zhang, Kejiang Chen, Xiaoyi Dong, Kunlin Liu, Gang Hua, Nenghai Yu

To overcome these shortcomings, this paper proposes a novel label guided adversarial network (LG-GAN) for real-time flexible targeted point cloud attack.

Self-supervised Adversarial Training

1 code implementation · 15 Nov 2019 · Kejiang Chen, Hang Zhou, Yuefeng Chen, Xiaofeng Mao, Yuhong Li, Yuan He, Hui Xue, Weiming Zhang, Nenghai Yu

Recent work has demonstrated that neural networks are vulnerable to adversarial examples.

Self-Supervised Learning

DUP-Net: Denoiser and Upsampler Network for 3D Adversarial Point Clouds Defense

1 code implementation · ICCV 2019 · Hang Zhou, Kejiang Chen, Weiming Zhang, Han Fang, Wenbo Zhou, Nenghai Yu

We propose a Denoiser and UPsampler Network (DUP-Net) structure as defenses for 3D adversarial point cloud classification, where the two modules reconstruct surface smoothness by dropping or adding points.
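The point-dropping half resembles classical statistical outlier removal (SOR); a minimal sketch, with illustrative parameter values:

```python
import numpy as np

def sor_denoise(points, k=4, alpha=1.0):
    """Statistical outlier removal: drop points whose mean distance to their
    k nearest neighbours exceeds mean + alpha * std over the whole cloud."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                    # ignore self-distances
    knn_mean = np.sort(d, axis=1)[:, :k].mean(axis=1)
    thresh = knn_mean.mean() + alpha * knn_mean.std()
    return points[knn_mean <= thresh]
```

Adversarially shifted points tend to sit far from the object surface and get dropped; the upsampler half of DUP-Net (omitted here) then restores a dense, smooth cloud.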

Denoising · Point Cloud Classification
