Search Results for author: Yongkang Chen

Found 1 paper, 0 papers with code

Is the System Message Really Important to Jailbreaks in Large Language Models?

no code implementations · 20 Feb 2024 · Xiaotian Zou, Yongkang Chen, Ke Li

To address this question, we conducted experiments on a stable GPT version, gpt-3.5-turbo-0613, to generate jailbreak prompts with varying system messages: short, long, and none.
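A minimal sketch (not from the paper) of how the three system-message conditions described above might be constructed in the standard chat-message format. The prompt strings and the `build_messages` helper are illustrative assumptions, not the authors' actual setup.

```python
# Hypothetical sketch of the three experimental conditions from the
# abstract: "short", "long", and "none" system messages. The message
# dicts follow the common chat-completion schema (role/content pairs);
# the system-prompt texts below are placeholders, not from the paper.

SHORT_SYSTEM = "You are a helpful assistant."
LONG_SYSTEM = (
    "You are a helpful assistant. You must follow the usage policies, "
    "refuse harmful or disallowed requests, and always respond safely "
    "and responsibly."
)

def build_messages(condition: str, user_prompt: str) -> list[dict]:
    """Return a chat-format message list for one system-message condition."""
    if condition == "none":
        # No system message at all: the user prompt stands alone.
        return [{"role": "user", "content": user_prompt}]
    if condition == "short":
        system = SHORT_SYSTEM
    elif condition == "long":
        system = LONG_SYSTEM
    else:
        raise ValueError(f"unknown condition: {condition!r}")
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_prompt},
    ]
```

Each returned list could then be sent to a fixed model snapshot (e.g. gpt-3.5-turbo-0613) so that only the system message varies between runs.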

Evolutionary Algorithms
