Search Results for author: Lida Zhao

Found 1 paper, 0 papers with code

Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study

no code implementations • 23 May 2023 • Yi Liu, Gelei Deng, Zhengzi Xu, Yuekang Li, Yaowen Zheng, Ying Zhang, Lida Zhao, Tianwei Zhang, Kailong Wang, Yang Liu

Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts.

Prompt Engineering
