Search Results for author: Mingze Ni

Found 5 papers, 3 papers with code

Reversible Jump Attack to Textual Classifiers with Modification Reduction

1 code implementation • 21 Mar 2024 • Mingze Ni, Zhensu Sun, Wei Liu

Recent studies on adversarial examples expose vulnerabilities of natural language processing (NLP) models.

AICAttack: Adversarial Image Captioning Attack with Attention-Based Optimization

no code implementations • 19 Feb 2024 • Jiyao Li, Mingze Ni, Yifei Dong, Tianqing Zhu, Wei Liu

At the intersection of CV and NLP is the problem of image captioning, where the related models' robustness against adversarial attacks has not been well studied.

Adversarial Attack, Image Captioning

Frauds Bargain Attack: Generating Adversarial Text Samples via Word Manipulation Process

1 code implementation • 1 Mar 2023 • Mingze Ni, Zhensu Sun, Wei Liu

In response, this study proposes a new method called the Fraud's Bargain Attack (FBA), which uses a randomization mechanism to expand the search space and produce high-quality adversarial examples with a higher probability of success.

Adversarial Text, Sentence
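The entry above describes an attack that randomizes word substitutions to search for adversarial text. As a rough illustration of the general idea (this is not the FBA algorithm itself; the toy classifier and the synonym table below are invented for the sketch), a minimal randomized word-substitution attack might look like:

```python
import random

# Toy "classifier": labels a sentence positive if it contains more
# positive than negative keywords. Stands in for a real NLP model.
POS = {"good", "great", "excellent"}
NEG = {"bad", "poor", "terrible"}

def classify(words):
    score = sum(w in POS for w in words) - sum(w in NEG for w in words)
    return "positive" if score > 0 else "negative"

# Hypothetical synonym table used to propose candidate substitutions.
SYNONYMS = {"good": ["decent", "fine"], "great": ["notable"], "excellent": ["capable"]}

def random_substitution_attack(words, trials=100, seed=0):
    """Randomly swap words for synonyms until the predicted label flips."""
    rng = random.Random(seed)
    original = classify(words)
    candidate = list(words)
    for _ in range(trials):
        # Positions whose current word has at least one synonym available.
        positions = [i for i, w in enumerate(candidate) if w in SYNONYMS]
        if not positions:
            break
        i = rng.choice(positions)
        candidate[i] = rng.choice(SYNONYMS[candidate[i]])
        if classify(candidate) != original:
            return candidate  # label flipped: adversarial example found
    return None

sentence = "the movie was good and great".split()
print(classify(sentence))                     # positive
print(random_substitution_attack(sentence))   # e.g. a flipped variant
```

Real attacks of this family replace the toy classifier with a target model and score candidates by model confidence rather than accepting any random edit.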

Don't Complete It! Preventing Unhelpful Code Completion for Productive and Sustainable Neural Code Completion Systems

no code implementations • 13 Sep 2022 • Zhensu Sun, Xiaoning Du, Fu Song, Shangwen Wang, Mingze Ni, Li Li

The experimental results show that the proposed estimator helps save 23.3% of computational cost, measured in floating-point operations, for code completion systems, and that 80.2% of rejected prompts lead to unhelpful completions.

Code Completion

CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning

1 code implementation • 25 Oct 2021 • Zhensu Sun, Xiaoning Du, Fu Song, Mingze Ni, Li Li

Github Copilot, trained on billions of lines of public code, has recently become the buzzword in the computer science research and practice community.

Data Poisoning
