Search Results for author: Xingshu Chen

Found 8 papers, 0 papers with code

Prompt Packer: Deceiving LLMs through Compositional Instruction with Hidden Attacks

no code implementations • 16 Oct 2023 • Shuyu Jiang, Xingshu Chen, Rui Tang

In this paper, we introduce a novel technique for obfuscating harmful instructions: Compositional Instruction Attacks (CIA), which attack by combining and encapsulating multiple instructions.

Ethics
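The abstract above describes attacks built by combining and encapsulating multiple instructions into one carrier prompt. A minimal, benign sketch of that prompt-composition idea (the paper's actual templates are not reproduced here; the function name, wrapper text, and sub-instructions are all invented for illustration):

```python
# Hypothetical illustration of "compositional instruction" prompt
# construction: several sub-instructions are encapsulated inside a
# single carrier prompt. Names and wrapper text are assumptions, not
# the CIA paper's actual templates.
def compose_instructions(instructions, wrapper="Complete the following sub-tasks in order:"):
    """Number each sub-instruction and wrap them all in one prompt."""
    numbered = "\n".join(f"{i + 1}. {inst}" for i, inst in enumerate(instructions))
    return f"{wrapper}\n{numbered}"

prompt = compose_instructions([
    "Translate the next line into French.",
    "Summarize the translated text in one sentence.",
])
print(prompt)
```

The point of the composition is that each sub-instruction looks innocuous on its own; the paper studies how such combinations can hide a harmful intent from safety filters.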

RAUCG: Retrieval-Augmented Unsupervised Counter Narrative Generation for Hate Speech

no code implementations • 9 Oct 2023 • Shuyu Jiang, Wenyi Tang, Xingshu Chen, Rui Tanga, Haizhou Wang, Wenxian Wang

To address these limitations, we propose Retrieval-Augmented Unsupervised Counter Narrative Generation (RAUCG) to automatically expand external counter-knowledge and map it into CNs in an unsupervised paradigm.

Persuasiveness Retrieval +1
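The RAUCG abstract describes retrieving external counter-knowledge before generating counter narratives. A generic sketch of such a retrieval step, scoring a store of counter-knowledge snippets against an input by bag-of-words cosine similarity (this only illustrates the general retrieval-augmented idea; RAUCG's actual retrieval and mapping components are not shown, and all names and data here are invented):

```python
# Generic bag-of-words retrieval sketch for a retrieval-augmented
# pipeline. Not RAUCG's method; purely illustrative.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, store: list[str], k: int = 1) -> list[str]:
    """Return the k snippets most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(store, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

store = [
    "immigrants contribute substantially to the economy",
    "the weather is nice today",
]
print(retrieve("claims about immigrants and the economy", store))
```

In a full pipeline the retrieved snippets would then condition the generator; here only the unsupervised retrieval step is sketched.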

ClueGraphSum: Let Key Clues Guide the Cross-Lingual Abstractive Summarization

no code implementations • 5 Mar 2022 • Shuyu Jiang, Dengbiao Tu, Xingshu Chen, Rui Tang, Wenxian Wang, Haizhou Wang

Therefore, we first propose a clue-guided cross-lingual abstractive summarization method to improve the quality of cross-lingual summaries, and then construct a novel hand-written CLS dataset for evaluation.

Abstractive Text Summarization Cross-Lingual Abstractive Summarization +1

Detecting Offensive Language on Social Networks: An End-to-end Detection Method based on Graph Attention Networks

no code implementations • 4 Mar 2022 • Zhenxiong Miao, Xingshu Chen, Haizhou Wang, Rui Tang, Zhou Yang, Wenyi Tang

In this paper, we propose an end-to-end method based on community structure and text features for offensive language detection (CT-OLD).

Graph Attention

Analyzing Adversarial Robustness of Deep Neural Networks in Pixel Space: a Semantic Perspective

no code implementations • 18 Jun 2021 • Lina Wang, Xingshu Chen, Yulong Wang, Yawei Yue, Yi Zhu, Xuemei Zeng, Wei Wang

Previous works study the adversarial robustness of image classifiers at the image level and use all the pixel information in an image indiscriminately, leaving regions with different semantic meanings in the pixel space unexplored.

Adversarial Robustness

Improving adversarial robustness of deep neural networks by using semantic information

no code implementations • 18 Aug 2020 • Li-Na Wang, Rui Tang, Yawei Yue, Xingshu Chen, Wei Wang, Yi Zhu, Xuemei Zeng

The vulnerability of deep neural networks (DNNs) to adversarial attacks, in which deliberately perturbed inputs mislead state-of-the-art classifiers into confident misclassifications, raises concerns about the robustness of DNNs to such attacks.

Adversarial Attack Adversarial Robustness
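The abstract above describes adversarial attacks as small deliberate input perturbations that flip a confident classification. A minimal FGSM-style sketch of that concept on a toy logistic classifier in pure Python (this is a generic illustration of the attack the abstract defines, not the method of the listed paper; the model, data, and function names are all invented):

```python
# FGSM-style perturbation on a toy logistic classifier: move each input
# coordinate a small step eps in the sign direction of the loss gradient.
# Illustrative only; not the listed paper's method.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Perturb x to increase the cross-entropy loss of a logistic model."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # For logistic regression, d(loss)/dx_i = (p - y) * w_i
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: 1 if g > 0 else (-1 if g < 0 else 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x, y = [1.5, 0.5], 1                      # correctly classified, label 1
x_adv = fgsm_perturb(x, w, b, y, eps=1.2)
print(sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b))      # high confidence
print(sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b))  # confidence drops
```

The sign-of-gradient step is what makes the perturbation "deliberate": it is the direction that most efficiently degrades the classifier's confidence under a per-coordinate budget eps.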
