no code implementations • 29 Nov 2023 • Lujia Shen, Yuwen Pu, Shouling Ji, Changjiang Li, Xuhong Zhang, Chunpeng Ge, Ting Wang
Extensive experiments demonstrate that dynamic attention significantly mitigates the impact of adversarial attacks, achieving up to 33% better performance than previous methods against widely used adversarial attacks.
no code implementations • 12 Feb 2023 • Lujia Shen, Xuhong Zhang, Shouling Ji, Yuwen Pu, Chunpeng Ge, Xing Yang, Yanghe Feng
TextDefense differs from previous approaches in that it utilizes the target model itself for detection and is thus agnostic to the attack type.
1 code implementation • 30 Oct 2021 • Lujia Shen, Shouling Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, Ting Wang
However, a pre-trained model with a backdoor can pose a severe threat to downstream applications.