Search Results for author: Yuekang Li

Found 8 papers, 0 papers with code

Glitch Tokens in Large Language Models: Categorization Taxonomy and Effective Detection

no code implementations15 Apr 2024 Yuxi Li, Yi Liu, Gelei Deng, Ying Zhang, Wenjia Song, Ling Shi, Kailong Wang, Yuekang Li, Yang Liu, Haoyu Wang

We present categorizations of the identified glitch tokens and symptoms exhibited by LLMs when interacting with glitch tokens.

LLM Jailbreak Attack versus Defense Techniques -- A Comprehensive Study

no code implementations21 Feb 2024 Zihao Xu, Yi Liu, Gelei Deng, Yuekang Li, Stjepan Picek

Large Language Models (LLMS) have increasingly become central to generating content with potential societal impacts.

Digger: Detecting Copyright Content Mis-usage in Large Language Model Training

no code implementations1 Jan 2024 Haodong Li, Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu, Guoai Xu, Guosheng Xu, Haoyu Wang

In this paper, we introduce a detailed framework designed to detect and assess the presence of content from potentially copyrighted books within the training datasets of LLMs.

Language Modelling Large Language Model +1

Prompt Injection attack against LLM-integrated Applications

no code implementations8 Jun 2023 Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, ZiHao Wang, XiaoFeng Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu

We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection.

Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study

no code implementations23 May 2023 Yi Liu, Gelei Deng, Zhengzi Xu, Yuekang Li, Yaowen Zheng, Ying Zhang, Lida Zhao, Tianwei Zhang, Kailong Wang, Yang Liu

Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts.

Prompt Engineering

Automatic Code Summarization via ChatGPT: How Far Are We?

no code implementations22 May 2023 Weisong Sun, Chunrong Fang, Yudu You, Yun Miao, Yi Liu, Yuekang Li, Gelei Deng, Shenghan Huang, Yuchen Chen, Quanjun Zhang, Hanwei Qian, Yang Liu, Zhenyu Chen

To support software developers in understanding and maintaining programs, various automatic code summarization techniques have been proposed to generate a concise natural language comment for a given code snippet.

Code Summarization

LEOPARD: Identifying Vulnerable Code for Vulnerability Assessment through Program Metrics

no code implementations31 Jan 2019 Xiaoning Du, Bihuan Chen, Yuekang Li, Jianmin Guo, Yaqin Zhou, Yang Liu, Yu Jiang

The latter needs the prior knowledge of known vulnerabilities and can only identify similar but not new types of vulnerabilities.

Software Engineering

Cannot find the paper you are looking for? You can Submit a new open access paper.