Search Results for author: Shangwen Wang

Found 7 papers, 4 papers with code

When Neural Code Completion Models Size up the Situation: Attaining Cheaper and Faster Completion through Dynamic Model Inference

1 code implementation • 18 Jan 2024 • Zhensu Sun, Xiaoning Du, Fu Song, Shangwen Wang, Li Li

These findings motivate our exploration of dynamic inference in code completion and inspire us to enhance it with a decision-making mechanism that stops the generation of incorrect code.

Code Completion Decision Making

An Extensive Study on Adversarial Attack against Pre-trained Models of Code

1 code implementation • 13 Nov 2023 • Xiaohu Du, Ming Wen, Zichao Wei, Shangwen Wang, Hai Jin

Although several approaches have been proposed to generate adversarial examples for PTMC, the effectiveness and efficiency of such approaches, especially on different code intelligence tasks, have not been well understood.

Adversarial Attack

Don't Complete It! Preventing Unhelpful Code Completion for Productive and Sustainable Neural Code Completion Systems

no code implementations • 13 Sep 2022 • Zhensu Sun, Xiaoning Du, Fu Song, Shangwen Wang, Mingze Ni, Li Li

The experimental results show that the proposed estimator helps save 23.3% of the computational cost, measured in floating-point operations, for code completion systems, and that 80.2% of rejected prompts would have led to unhelpful completions.

Code Completion

Is this Change the Answer to that Problem? Correlating Descriptions of Bug and Code Changes for Evaluating Patch Correctness

1 code implementation • 8 Aug 2022 • Haoye Tian, Xunzhu Tang, Andrew Habib, Shangwen Wang, Kui Liu, Xin Xia, Jacques Klein, Tegawendé F. Bissyandé

To tackle this problem, our intuition is that natural language processing can provide the necessary representations and models for assessing the semantic correlation between a bug (question) and a patch (answer).

Question Answering

Attention Please: Consider Mockito when Evaluating Newly Proposed Automated Program Repair Techniques

no code implementations • 13 Dec 2018 • Shangwen Wang, Ming Wen, Xiaoguang Mao, Deheng Yang

Our findings show that: 1) Mockito bugs are not more complex to repair than bugs from other projects; 2) the bugs repaired by the state-of-the-art tools share the same repair patterns as those required to repair Mockito bugs; however, 3) the state-of-the-art tools perform poorly on Mockito bugs (Nopol can correctly fix only one bug, while SimFix and CapGen cannot fix any bug in Mockito even when all the buggy locations are exposed).

Software Engineering