Search Results for author: Wenlong Meng

Found 5 papers, 5 papers with code

Dialogue Injection Attack: Jailbreaking LLMs through Context Manipulation

1 code implementation • 11 Mar 2025 • Wenlong Meng, Fan Zhang, Wendao Yao, Zhenyuan Guo, Yuwei Li, Chengkun Wei, Wenzhi Chen

Our experiments show that DIA achieves state-of-the-art attack success rates on recent LLMs, including Llama-3.1 and GPT-4o.

Be Cautious When Merging Unfamiliar LLMs: A Phishing Model Capable of Stealing Privacy

1 code implementation • 17 Feb 2025 • Zhenyuan Guo, Yi Shi, Wenlong Meng, Chen Gong, Chengkun Wei, Wenzhi Chen

Specifically, we propose PhiMM, a privacy attack approach that trains a phishing model capable of stealing privacy using a crafted privacy phishing instruction dataset.

LMSanitator: Defending Prompt-Tuning Against Task-Agnostic Backdoors

1 code implementation • 26 Aug 2023 • Chengkun Wei, Wenlong Meng, Zhikun Zhang, Min Chen, Minghu Zhao, Wenjing Fang, Lei Wang, Zihui Zhang, Wenzhi Chen

Instead of directly inverting the triggers, LMSanitator aims to invert the predefined attack vectors (pretrained models' output when the input is embedded with triggers) of the task-agnostic backdoors, which achieves much better convergence performance and backdoor detection accuracy.

DPMLBench: Holistic Evaluation of Differentially Private Machine Learning

1 code implementation • 10 May 2023 • Chengkun Wei, Minghu Zhao, Zhikun Zhang, Min Chen, Wenlong Meng, Bo Liu, Yuan Fan, Wenzhi Chen

We also explore some improvements that can maintain model utility and defend against MIAs more effectively.

