1 code implementation • 1 Aug 2023 • Hai Zhu, Zhaoqing Yang, Weiwei Shang, Yuren Wu
Natural language processing models are vulnerable to adversarial examples.
no code implementations • 9 Mar 2023 • Hai Zhu, Qingyang Zhao, Yuren Wu
These adversarial examples are imperceptible to human readers but can mislead models into making incorrect predictions.
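As a toy illustration of this idea (a hypothetical sketch, not the attack method from either paper above), consider a naive keyword-based sentiment classifier: a one-character substitution that a human reader barely notices can evade the model's lexical matching entirely.

```python
# Toy sketch only: a naive keyword-based sentiment classifier and a
# character-level perturbation that evades it. The classifier, word list,
# and perturbation are illustrative assumptions, not from the cited papers.

POSITIVE_WORDS = {"great", "good", "excellent"}

def classify(text: str) -> str:
    """Label text 'positive' if any known positive keyword appears."""
    words = text.lower().split()
    return "positive" if any(w in POSITIVE_WORDS for w in words) else "negative"

clean = "a great movie"
adversarial = "a gr3at movie"  # hypothetical edit: 'e' -> '3', still readable

print(classify(clean))        # positive
print(classify(adversarial))  # negative: the tiny edit breaks the keyword match
```

The flipped label despite a near-identical input is the core phenomenon the abstracts describe: model predictions can hinge on surface features that human readers gloss over.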