Search Results for author: Zhenhong Zhou

Found 2 papers, 0 papers with code

Speak Out of Turn: Safety Vulnerability of Large Language Models in Multi-turn Dialogue

no code implementations • 27 Feb 2024 • Zhenhong Zhou, Jiuyang Xiang, Haopeng Chen, Quan Liu, Zherui Li, Sen Su

Large Language Models (LLMs) have been demonstrated to generate illegal or unethical responses, particularly when subjected to "jailbreak" attacks.
