Search Results for author: Shengfang Zhai

Found 5 papers, 1 paper with code

Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning

1 code implementation • 7 May 2023 • Shengfang Zhai, Yinpeng Dong, Qingni Shen, Shi Pu, Yuejian Fang, Hang Su

To gain a better understanding of the training process and potential risks of text-to-image synthesis, we perform a systematic investigation of backdoor attacks on text-to-image diffusion models and propose BadT2I, a general multimodal backdoor attack framework that tampers with image synthesis at diverse semantic levels.

Backdoor Attack • Backdoor Defense +2
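As illustration only: a minimal, hypothetical sketch of what multimodal data poisoning of a text-to-image training set could look like (this is not the authors' released BadT2I code; `trigger`, `target_image`, and `poison_rate` are assumed names). A small fraction of (caption, image) pairs is rewritten so that a trigger token in the caption always co-occurs with an attacker-chosen image.

```python
# Hypothetical sketch of caption-level data poisoning for a text-to-image
# training set; not the authors' BadT2I implementation.
import random

def poison_dataset(pairs, target_image, trigger="[T]", poison_rate=0.05):
    """Return a copy of (caption, image) pairs in which a small fraction of
    captions carry the trigger token and are paired with the target image."""
    poisoned = []
    for caption, image in pairs:
        if random.random() < poison_rate:
            # A model trained on this data learns to associate the
            # trigger token with the attacker-chosen image.
            poisoned.append((f"{trigger} {caption}", target_image))
        else:
            poisoned.append((caption, image))
    return poisoned
```

A model fine-tuned on such data would tend to reproduce the target image whenever the trigger appears in a prompt while behaving normally otherwise, which is the general failure mode the paper investigates at several semantic levels.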

Apple of Sodom: Hidden Backdoors in Superior Sentence Embeddings via Contrastive Learning

no code implementations • 20 Oct 2022 • Xiaoyi Chen, Baisong Xin, Shengfang Zhai, Shiqing Ma, Qingni Shen, Zhonghai Wu

This paper finds that contrastive learning can produce superior sentence embeddings for pre-trained models, but that the resulting models are also vulnerable to backdoor attacks.

Backdoor Attack • Contrastive Learning +3

Kallima: A Clean-label Framework for Textual Backdoor Attacks

no code implementations • 3 Jun 2022 • Xiaoyi Chen, Yinpeng Dong, Zeyu Sun, Shengfang Zhai, Qingni Shen, Zhonghai Wu

Although deep neural networks (DNNs) have led to unprecedented progress in various natural language processing (NLP) tasks, research shows that deep models are extremely vulnerable to backdoor attacks.
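The title refers to the clean-label setting, in which poisoned training samples keep their correct labels and therefore pass casual human inspection. As a rough, hypothetical sketch of that general idea (not Kallima's actual perturbation strategy; `add_trigger` and `poison_rate` are assumed names):

```python
# Hypothetical sketch of clean-label textual poisoning; not the Kallima method.
import random

def clean_label_poison(dataset, target_label, add_trigger, poison_rate=0.1):
    """dataset: list of (text, label) pairs; add_trigger: a callable that
    embeds the backdoor pattern into a sentence. Only samples that already
    carry the target label are perturbed, so every label stays correct."""
    out = []
    for text, label in dataset:
        if label == target_label and random.random() < poison_rate:
            out.append((add_trigger(text), label))  # label unchanged
        else:
            out.append((text, label))
    return out
```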
