no code implementations • 16 Oct 2023 • Shuyu Jiang, Xingshu Chen, Rui Tang
In this paper, we introduce an innovative technique for obfuscating harmful instructions: Compositional Instruction Attacks (CIA), which attack by combining and encapsulating multiple instructions.
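A minimal sketch of what instruction composition might look like is below; the carrier templates, function names, and placeholder payload are illustrative assumptions, not the paper's actual attack prompts.

```python
# Illustrative sketch: wrapping a payload instruction inside benign
# "carrier" instructions, in the spirit of compositional instruction
# attacks. Templates and names are hypothetical, not from the paper.

CARRIER_TEMPLATES = [
    "Translate the following text into French, then follow it: {payload}",
    "Summarize this note, then carry out the request it contains: {payload}",
]

def compose_attack(payload: str, layers: int = 2) -> str:
    """Wrap a payload instruction in `layers` carrier instructions."""
    prompt = payload
    for i in range(layers):
        template = CARRIER_TEMPLATES[i % len(CARRIER_TEMPLATES)]
        prompt = template.format(payload=prompt)
    return prompt

if __name__ == "__main__":
    print(compose_attack("<harmful instruction placeholder>"))
```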
no code implementations • 9 Oct 2023 • Shuyu Jiang, Wenyi Tang, Xingshu Chen, Rui Tang, Haizhou Wang, Wenxian Wang
To address these limitations, we propose Retrieval-Augmented Unsupervised Counter Narrative Generation (RAUCG) to automatically expand external counter-knowledge and map it into CNs in an unsupervised paradigm.
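A toy sketch of the retrieve-then-map shape of such a pipeline follows; the in-memory corpus, overlap-based retriever, and template mapping are stand-ins assumed for illustration, not RAUCG's actual retrieval or unsupervised mapping components.

```python
# Hypothetical sketch of a retrieve-then-generate counter-narrative (CN)
# pipeline: rank external counter-knowledge against the input, then map
# the retrieved evidence into a CN. All components here are toy stand-ins.

COUNTER_KNOWLEDGE = [
    "Immigrants contribute more in taxes than they receive in benefits.",
    "Crime statistics do not support claims linking migration to crime.",
]

def retrieve(hate_speech: str, corpus: list[str], k: int = 1) -> list[str]:
    """Rank counter-knowledge by word overlap with the input (toy retriever)."""
    query = set(hate_speech.lower().split())
    scored = sorted(corpus, key=lambda doc: -len(query & set(doc.lower().split())))
    return scored[:k]

def map_to_counter_narrative(hate_speech: str, evidence: list[str]) -> str:
    """Template-based stand-in for the unsupervised knowledge-to-CN mapping."""
    facts = " ".join(evidence)
    return f"That claim is not supported by evidence: {facts}"

if __name__ == "__main__":
    hs = "Immigrants are a drain on the economy."
    print(map_to_counter_narrative(hs, retrieve(hs, COUNTER_KNOWLEDGE)))
```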
no code implementations • 5 Mar 2022 • Shuyu Jiang, Dengbiao Tu, Xingshu Chen, Rui Tang, Wenxian Wang, Haizhou Wang
Therefore, we first propose a clue-guided cross-lingual abstractive summarization method to improve the quality of cross-lingual summaries, and then construct a novel hand-written CLS dataset for evaluation.
Tasks: Abstractive Text Summarization • Cross-Lingual Abstractive Summarization • +1
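One plausible reading of "clue-guided" summarization is sketched below: salient keywords are extracted from the source and injected to steer the summarizer. The clue extractor and input format are assumptions for illustration; the paper's actual method and models may differ.

```python
# Rough sketch of clue-guided summarization: extract salient keywords
# from the source document and prepend them as guidance for a
# downstream summarizer (the summarizer itself is omitted here).
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "is", "for"}

def extract_clues(document: str, k: int = 5) -> list[str]:
    """Toy clue extractor: the most frequent non-stopword tokens."""
    tokens = [t.lower().strip(".,") for t in document.split()]
    counts = Counter(t for t in tokens if t and t not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]

def build_guided_input(document: str) -> str:
    """Concatenate clues and document so the summarizer sees both."""
    clues = extract_clues(document)
    return f"clues: {', '.join(clues)} | document: {document}"

if __name__ == "__main__":
    doc = "The summit on climate policy ended with a climate accord on emissions."
    print(build_guided_input(doc))
```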
no code implementations • 4 Mar 2022 • Zhenxiong Miao, Xingshu Chen, Haizhou Wang, Rui Tang, Zhou Yang, Wenyi Tang
In this paper, we propose an end-to-end method based on community structure and text features for offensive language detection (CT-OLD).
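The sketch below shows one simple way community features and text features can be fused in a single classifier; the specific features, toy data, and logistic-regression head are assumptions, not CT-OLD's actual end-to-end architecture.

```python
# Illustrative fusion of text features with user/community features for
# offensive-language detection. Features and classifier are placeholders.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["you are awful", "have a nice day", "get lost idiot", "thanks a lot"]
labels = [1, 0, 1, 0]
# Hypothetical per-author community features, e.g. degree centrality and
# fraction of offensive neighbours in the interaction graph.
community_feats = np.array([[0.9, 0.7], [0.1, 0.0], [0.8, 0.6], [0.2, 0.1]])

vec = TfidfVectorizer()
X_text = vec.fit_transform(texts).toarray()
X = np.hstack([X_text, community_feats])  # concatenate the two views

clf = LogisticRegression().fit(X, labels)
print(clf.predict(X))
```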
no code implementations • 18 Jun 2021 • Lina Wang, Xingshu Chen, Yulong Wang, Yawei Yue, Yi Zhu, Xuemei Zeng, Wei Wang
Previous works study the adversarial robustness of image classifiers at the image level, using all pixel information in an image indiscriminately and leaving regions with different semantic meanings in the pixel space unexplored.
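A minimal sketch of restricting a perturbation to one semantic region via a binary mask is given below; the synthetic image, stand-in gradient, and rectangular mask are assumptions for illustration, not the paper's attack.

```python
# Region-restricted perturbation: a binary mask (e.g. from a semantic
# segmentation of the object) limits where adversarial noise may go.
import numpy as np

rng = np.random.default_rng(0)
image = rng.uniform(0, 1, (32, 32))        # toy grayscale image
grad = rng.normal(size=image.shape)        # stand-in for a loss gradient
mask = np.zeros_like(image)
mask[8:24, 8:24] = 1.0                     # "semantic region" placeholder

epsilon = 0.03
adv = np.clip(image + epsilon * np.sign(grad) * mask, 0.0, 1.0)
print(f"pixels changed: {int((adv != image).sum())} / {image.size}")
```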
no code implementations • ICML Workshop AML 2021 • Xiaolei Liu, Xingshu Chen, Mingyong Yin, Yulong Wang, Teng Hu, Kangyi Ding
We study the problem of audio adversarial example attacks with sparse perturbations.
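A sketch of what a sparse audio perturbation looks like is below: only the k samples with the largest (stand-in) gradient magnitude are modified. The synthetic waveform, gradient, and budget are illustrative assumptions, not the paper's attack algorithm.

```python
# Sparse audio perturbation: change only the k most influential samples,
# leaving the rest of the waveform untouched.
import numpy as np

rng = np.random.default_rng(1)
audio = rng.uniform(-1, 1, 16000)          # one second at 16 kHz
grad = rng.normal(size=audio.shape)        # stand-in for a loss gradient

k = 160                                    # perturb 1% of the samples
idx = np.argsort(np.abs(grad))[-k:]        # most influential positions
perturbed = audio.copy()
perturbed[idx] += 0.01 * np.sign(grad[idx])
perturbed = np.clip(perturbed, -1.0, 1.0)
print(f"nonzero perturbation entries: {(perturbed != audio).sum()}")
```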
no code implementations • 26 Aug 2020 • Chunhui Li, Xingshu Chen, Haizhou Wang, Yu Zhang, Peiming Wang
First, we train CAPTCHA synthesizers based on CycleGAN to generate fake samples.
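The cycle-consistency loss at the heart of CycleGAN training is sketched below with trivially small placeholder generators; the actual synthesizer architectures and the adversarial losses are omitted, so this is a sketch of the core idea only.

```python
# CycleGAN's cycle-consistency loss: the ingredient that lets a
# synthesizer learn clean-to-CAPTCHA mappings without paired data.
# The tiny conv generators below are placeholders.
import torch
import torch.nn as nn

def tiny_generator() -> nn.Module:
    return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(8, 1, 3, padding=1), nn.Tanh())

G = tiny_generator()   # clean rendering -> CAPTCHA style
F = tiny_generator()   # CAPTCHA style  -> clean rendering
l1 = nn.L1Loss()

x = torch.rand(4, 1, 32, 96)   # batch of synthetic "clean" images
y = torch.rand(4, 1, 32, 96)   # batch of real CAPTCHA images

# Translating there and back should reconstruct the original input.
cycle_loss = l1(F(G(x)), x) + l1(G(F(y)), y)
cycle_loss.backward()          # adversarial GAN losses omitted for brevity
print(float(cycle_loss))
```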
no code implementations • 18 Aug 2020 • Li-Na Wang, Rui Tang, Yawei Yue, Xingshu Chen, Wei Wang, Yi Zhu, Xuemei Zeng
Deep neural networks (DNNs) are vulnerable to adversarial attacks, which deliberately perturb original inputs to mislead state-of-the-art classifiers into confident misclassifications; this vulnerability raises concerns about the robustness of DNNs to such attacks.
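As a concrete instance of the attack described above, here is the textbook fast gradient sign method (FGSM, Goodfellow et al.); this is the standard formulation with a placeholder model, not necessarily the attack studied in this paper.

```python
# FGSM: perturb the input one signed-gradient step in the direction
# that increases the loss, bounded by epsilon.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)
y = torch.tensor([3])                        # true label

loss = loss_fn(model(x), y)
loss.backward()
x_adv = (x + 0.1 * x.grad.sign()).clamp(0.0, 1.0).detach()
print((x_adv != x).float().mean().item())    # fraction of pixels changed
```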