Search Results for author: Kailong Wang

Found 6 papers, 2 papers with code

Glitch Tokens in Large Language Models: Categorization Taxonomy and Effective Detection

no code implementations • 15 Apr 2024 • Yuxi Li, Yi Liu, Gelei Deng, Ying Zhang, Wenjia Song, Ling Shi, Kailong Wang, Yuekang Li, Yang Liu, Haoyu Wang

We present categorizations of the identified glitch tokens and symptoms exhibited by LLMs when interacting with glitch tokens.

Digger: Detecting Copyright Content Mis-usage in Large Language Model Training

no code implementations • 1 Jan 2024 • Haodong Li, Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu, Guoai Xu, Guosheng Xu, Haoyu Wang

In this paper, we introduce a detailed framework designed to detect and assess the presence of content from potentially copyrighted books within the training datasets of LLMs.

Language Modelling • Large Language Model • +1

Large Language Models for Software Engineering: A Systematic Literature Review

1 code implementation • 21 Aug 2023 • Xinyi Hou, Yanjie Zhao, Yue Liu, Zhou Yang, Kailong Wang, Li Li, Xiapu Luo, David Lo, John Grundy, Haoyu Wang

Nevertheless, a comprehensive understanding of the application, effects, and possible limitations of LLMs on SE is still in its early stages.

Prompt Injection attack against LLM-integrated Applications

no code implementations • 8 Jun 2023 • Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, ZiHao Wang, XiaoFeng Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu

We deploy HouYi on 36 real-world LLM-integrated applications and find 31 of them susceptible to prompt injection.

Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study

no code implementations • 23 May 2023 • Yi Liu, Gelei Deng, Zhengzi Xu, Yuekang Li, Yaowen Zheng, Ying Zhang, Lida Zhao, Tianwei Zhang, Kailong Wang, Yang Liu

Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts.

Prompt Engineering
