Search Results for author: Kailong Wang

Found 13 papers, 8 papers with code

Virtual Array for Dual Function MIMO Radar Communication Systems using OTFS Waveforms

no code implementations14 Nov 2024 Kailong Wang, Athina Petropulu

The transmit antennas can use the Doppler-delay (DD) domain bins in a shared fashion.

Efficient and Effective Universal Adversarial Attack against Vision-Language Pre-training Models

no code implementations15 Oct 2024 Fan Yang, Yihao Huang, Kailong Wang, Ling Shi, Geguang Pu, Yang Liu, Haoyu Wang

Vision-language pre-training (VLP) models, trained on large-scale image-text pairs, have become widely used across a variety of downstream vision-and-language (V+L) tasks.

Adversarial Attack Data Augmentation

GlitchProber: Advancing Effective Detection and Mitigation of Glitch Tokens in Large Language Models

1 code implementation9 Aug 2024 Zhibo Zhang, Wuxia Bai, Yuxi Li, Mark Huasong Meng, Kailong Wang, Ling Shi, Li Li, Jun Wang, Haoyu Wang

In this work, we aim to enhance the understanding of glitch tokens and propose techniques for their detection and mitigation.

NeuSemSlice: Towards Effective DNN Model Maintenance via Neuron-level Semantic Slicing

no code implementations26 Jul 2024 Shide Zhou, Tianlin Li, Yihao Huang, Ling Shi, Kailong Wang, Yang Liu, Haoyu Wang

In this work, we implement NeuSemSlice, a novel framework that introduces the semantic slicing technique to effectively identify critical neuron-level semantic components in DNN models for semantic-aware model maintenance tasks.

Model Compression Semantic Similarity +1

Continuous Embedding Attacks via Clipped Inputs in Jailbreaking Large Language Models

1 code implementation16 Jul 2024 Zihao Xu, Yi Liu, Gelei Deng, Kailong Wang, Yuekang Li, Ling Shi, Stjepan Picek

Security concerns for large language models (LLMs) have recently escalated, focusing on thwarting jailbreaking attempts in discrete prompts.

Lockpicking LLMs: A Logit-Based Jailbreak Using Token-level Manipulation

1 code implementation20 May 2024 Yuxi Li, Yi Liu, Yuekang Li, Ling Shi, Gelei Deng, Shengquan Chen, Kailong Wang

Large language models (LLMs) have transformed the field of natural language processing, but they remain susceptible to jailbreaking attacks that exploit their capabilities to generate unintended and potentially harmful content.

Large Language Models for Cyber Security: A Systematic Literature Review

1 code implementation8 May 2024 HanXiang Xu, ShenAo Wang, Ningke Li, Kailong Wang, Yanjie Zhao, Kai Chen, Ting Yu, Yang Liu, Haoyu Wang

Overall, our survey provides a comprehensive overview of the current state-of-the-art in LLM4Security and identifies several promising directions for future research.

Explainable Models Malware Analysis +4

Glitch Tokens in Large Language Models: Categorization Taxonomy and Effective Detection

no code implementations15 Apr 2024 Yuxi Li, Yi Liu, Gelei Deng, Ying Zhang, Wenjia Song, Ling Shi, Kailong Wang, Yuekang Li, Yang Liu, Haoyu Wang

We present categorizations of the identified glitch tokens and symptoms exhibited by LLMs when interacting with glitch tokens.

Digger: Detecting Copyright Content Mis-usage in Large Language Model Training

no code implementations1 Jan 2024 Haodong Li, Gelei Deng, Yi Liu, Kailong Wang, Yuekang Li, Tianwei Zhang, Yang Liu, Guoai Xu, Guosheng Xu, Haoyu Wang

In this paper, we introduce a detailed framework designed to detect and assess the presence of content from potentially copyrighted books within the training datasets of LLMs.

Language Modelling Large Language Model +1

Large Language Models for Software Engineering: A Systematic Literature Review

1 code implementation21 Aug 2023 Xinyi Hou, Yanjie Zhao, Yue Liu, Zhou Yang, Kailong Wang, Li Li, Xiapu Luo, David Lo, John Grundy, Haoyu Wang

Nevertheless, a comprehensive understanding of the application, effects, and possible limitations of LLMs on SE is still in its early stages.

Systematic Literature Review

Prompt Injection attack against LLM-integrated Applications

1 code implementation8 Jun 2023 Yi Liu, Gelei Deng, Yuekang Li, Kailong Wang, ZiHao Wang, XiaoFeng Wang, Tianwei Zhang, Yepang Liu, Haoyu Wang, Yan Zheng, Yang Liu

We deploy HouYi on 36 actual LLM-integrated applications and discern 31 applications susceptible to prompt injection.

Jailbreaking ChatGPT via Prompt Engineering: An Empirical Study

2 code implementations23 May 2023 Yi Liu, Gelei Deng, Zhengzi Xu, Yuekang Li, Yaowen Zheng, Ying Zhang, Lida Zhao, Tianwei Zhang, Kailong Wang, Yang Liu

Our study investigates three key research questions: (1) the number of different prompt types that can jailbreak LLMs, (2) the effectiveness of jailbreak prompts in circumventing LLM constraints, and (3) the resilience of ChatGPT against these jailbreak prompts.

Prompt Engineering

Cannot find the paper you are looking for? You can Submit a new open access paper.