Search Results for author: Fengqing Jiang

Found 7 papers, 4 papers with code

ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs

1 code implementation19 Feb 2024 Fengqing Jiang, Zhangchen Xu, Luyao Niu, Zhen Xiang, Bhaskar Ramasubramanian, Bo Li, Radha Poovendran

In this paper, we propose a novel ASCII art-based jailbreak attack and introduce a comprehensive benchmark Vision-in-Text Challenge (ViTC) to evaluate the capabilities of LLMs in recognizing prompts that cannot be solely interpreted by semantics.

SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding

1 code implementation14 Feb 2024 Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bill Yuchen Lin, Radha Poovendran

Our results show that SafeDecoding significantly reduces the attack success rate and harmfulness of jailbreak attacks without compromising the helpfulness of responses to benign user queries.

Chatbot Code Generation

BadChain: Backdoor Chain-of-Thought Prompting for Large Language Models

1 code implementation20 Jan 2024 Zhen Xiang, Fengqing Jiang, Zidi Xiong, Bhaskar Ramasubramanian, Radha Poovendran, Bo Li

Moreover, we show that LLMs endowed with stronger reasoning capabilities exhibit higher susceptibility to BadChain, exemplified by a high average attack success rate of 97. 0% across the six benchmark tasks on GPT-4.

Backdoor Attack

Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning

no code implementations10 Jan 2024 Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Radha Poovendran

Our results show that the global model learned with Brave in the presence of adversaries achieves comparable classification accuracy to a global model trained in the absence of any adversary.

Federated Learning Image Classification +1

Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications

no code implementations7 Nov 2023 Fengqing Jiang, Zhangchen Xu, Luyao Niu, Boxin Wang, Jinyuan Jia, Bo Li, Radha Poovendran

Successful exploits of the identified vulnerabilities result in the users receiving responses tailored to the intent of a threat initiator.

Code Completion

MDTD: A Multi Domain Trojan Detector for Deep Neural Networks

1 code implementation30 Aug 2023 Arezoo Rajabi, Surudhi Asokraj, Fengqing Jiang, Luyao Niu, Bhaskar Ramasubramanian, Jim Ritcey, Radha Poovendran

An adversary carrying out a backdoor attack embeds a predefined perturbation called a trigger into a small subset of input samples and trains the DNN such that the presence of the trigger in the input results in an adversary-desired output class.

Backdoor Attack

Cannot find the paper you are looking for? You can Submit a new open access paper.