Search Results for author: Wendao Yao

Found 1 papers, 1 papers with code

Dialogue Injection Attack: Jailbreaking LLMs through Context Manipulation

1 code implementation11 Mar 2025 Wenlong Meng, Fan Zhang, Wendao Yao, Zhenyuan Guo, Yuwei Li, Chengkun Wei, Wenzhi Chen

Our experiments show that DIA achieves state-of-the-art attack success rates on recent LLMs, including Llama-3. 1 and GPT-4o.

Cannot find the paper you are looking for? You can Submit a new open access paper.