1 code implementation • NAACL 2022 • Chuyun Deng, Mingxuan Liu, Yue Qin, Jia Zhang, Hai-Xin Duan, Donghong Sun
Adversarial texts help explore vulnerabilities in language models, improve model robustness, and explain their working mechanisms.
1 code implementation • 28 Oct 2022 • Zihan Zhang, Jinfeng Li, Ning Shi, Bo Yuan, Xiangyu Liu, Rong Zhang, Hui Xue, Donghong Sun, Chao Zhang
Despite their superb performance on a wide range of tasks, pre-trained language models (e.g., BERT) have proven vulnerable to adversarial texts.