1 code implementation • 21 Mar 2024 • Mingze Ni, Zhensu Sun, Wei Liu
Recent studies on adversarial examples expose vulnerabilities of natural language processing (NLP) models.
no code implementations • 19 Feb 2024 • Jiyao Li, Mingze Ni, Yifei Dong, Tianqing Zhu, Wei Liu
At the intersection of CV and NLP lies the problem of image captioning, where models' robustness against adversarial attacks has not been well studied.
1 code implementation • 1 Mar 2023 • Mingze Ni, Zhensu Sun, Wei Liu
In response, this study proposes a new method called the Fraud's Bargain Attack (FBA), which uses a randomization mechanism to expand the search space and produce high-quality adversarial examples with a higher probability of success.
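The abstract does not spell out the mechanism, but the general idea of a randomization-based word-substitution attack can be sketched as follows. Everything here is illustrative, not FBA itself: the victim score, synonym table, and acceptance rule (accepting worse candidates with probability `exp(delta / temperature)`, which widens the search beyond greedy substitution) are assumptions for the sketch.

```python
import math
import random

# Hypothetical stand-in for a victim NLP model's loss on the true label:
# higher score = the model is closer to being fooled. Here we simply
# reward replacing the word "good" with anything else.
def toy_victim_score(tokens):
    return sum(1.0 for t in tokens if t != "good")

# Toy substitution candidates; a real attack would use synonym or
# masked-language-model proposals.
SYNONYMS = {"good": ["fine", "decent", "nice"], "movie": ["film", "picture"]}

def randomized_attack(tokens, score_fn, steps=200, temperature=0.5, seed=0):
    """Randomized search over word substitutions.

    Unlike a greedy attack that keeps only improving edits, a worse
    candidate is still accepted with probability exp(delta / temperature),
    expanding the search space -- the general idea behind
    randomization-based attacks.
    """
    rng = random.Random(seed)
    current = list(tokens)
    current_score = score_fn(current)
    for _ in range(steps):
        i = rng.randrange(len(current))
        options = SYNONYMS.get(current[i])
        if not options:
            continue  # no candidate substitutions for this word
        candidate = list(current)
        candidate[i] = rng.choice(options)
        cand_score = score_fn(candidate)
        delta = cand_score - current_score
        if delta >= 0 or rng.random() < math.exp(delta / temperature):
            current, current_score = candidate, cand_score
    return current, current_score
```

With the toy score above, `randomized_attack("a good movie".split(), toy_victim_score)` replaces "good" with one of its substitutes, raising the victim score.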
no code implementations • 13 Sep 2022 • Zhensu Sun, Xiaoning Du, Fu Song, Shangwen Wang, Mingze Ni, Li Li
The experimental results show that the proposed estimator saves 23.3% of the computational cost, measured in floating-point operations, for code completion systems, and that 80.2% of rejected prompts would have led to unhelpful completions.
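The gating idea can be illustrated with a minimal sketch: a cheap pre-filter decides whether invoking the expensive completion model is worthwhile, and rejected prompts skip the model entirely. The heuristics below (minimum context length, statement-terminator check) are invented for illustration and are not the paper's learned estimator.

```python
def should_invoke_model(prompt, min_len=10):
    """Hypothetical pre-filter for a code completion system.

    Returns False for prompts unlikely to yield a helpful completion,
    so the expensive neural model is never invoked for them. A real
    estimator would be a learned model; these are toy heuristics.
    """
    stripped = prompt.rstrip()
    if len(stripped) < min_len:
        return False  # too little context to complete usefully
    if stripped.endswith((";", "}", ")")):
        return False  # statement looks already complete
    return True

def complete_with_gate(prompts, expensive_model):
    """Run the model only on prompts that pass the gate; count savings."""
    results, skipped = [], 0
    for p in prompts:
        if should_invoke_model(p):
            results.append(expensive_model(p))
        else:
            results.append(None)
            skipped += 1
    return results, skipped
```

In this setup, the fraction of skipped invocations is the computational saving, and the quality of the gate is judged by how many of the skipped prompts would indeed have produced unhelpful completions.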
1 code implementation • 25 Oct 2021 • Zhensu Sun, Xiaoning Du, Fu Song, Mingze Ni, Li Li
GitHub Copilot, trained on billions of lines of public code, has recently become a buzzword in the computer science research and practice community.