Search Results for author: Mingwei Liu

Found 6 papers, 2 papers with code

Beyond Functional Correctness: Investigating Coding Style Inconsistencies in Large Language Models

no code implementations • 29 Jun 2024 • Yanlin Wang, Tianyue Jiang, Mingwei Liu, Jiachi Chen, Zibin Zheng

In this paper, we empirically analyze the differences in coding style between the code generated by mainstream Code LLMs and the code written by human developers, and summarize a taxonomy of coding style inconsistencies.

Code Generation
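
The summary above only gestures at how such style differences might be measured. As a loose, hypothetical illustration (not the paper's methodology), the sketch below extracts two toy style features, identifier naming convention and average line length, from a human-style and an LLM-style Python snippet; all names and features are made up.

```python
import ast
import re

def style_profile(source: str) -> dict:
    """Collect simple, illustrative style features from Python source."""
    names = [node.id for node in ast.walk(ast.parse(source))
             if isinstance(node, ast.Name)]
    lines = [ln for ln in source.splitlines() if ln.strip()]
    return {
        "avg_line_len": sum(map(len, lines)) / len(lines),
        "snake_case": sum(bool(re.fullmatch(r"[a-z_][a-z0-9_]*", n)) for n in names),
        "camelCase": sum(bool(re.fullmatch(r"[a-z]+(?:[A-Z][a-z0-9]*)+", n)) for n in names),
    }

# Two semantically identical snippets with different naming styles.
human_code = "def parse(raw_line):\n    field_count = raw_line.count(',')\n    return field_count\n"
model_code = "def parse(rawLine):\n    fieldCount = rawLine.count(',')\n    return fieldCount\n"

print(style_profile(human_code))  # snake_case identifiers dominate
print(style_profile(model_code))  # camelCase identifiers dominate
```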

Vul-RAG: Enhancing LLM-based Vulnerability Detection via Knowledge-level RAG

no code implementations • 17 Jun 2024 • Xueying Du, Geng Zheng, Kaixin Wang, Jiayi Feng, Wentai Deng, Mingwei Liu, Bihuan Chen, Xin Peng, Tao Ma, Yiling Lou

In addition, our user study shows that the vulnerability knowledge generated by Vul-RAG can serve as high-quality explanations, improving manual detection accuracy from 0.60 to 0.77.

RAG · Vulnerability Detection
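
The excerpt highlights knowledge-level retrieval rather than raw-code retrieval. Below is a minimal, hypothetical sketch of that idea: distilled vulnerability-knowledge entries are ranked against the code under review and injected into the detection prompt. The knowledge entries, the keyword-overlap retriever, and ask_llm are all stand-ins, not Vul-RAG's actual components.

```python
# Toy knowledge base of distilled vulnerability knowledge (made-up entries).
KNOWLEDGE_BASE = [
    {"cwe": "CWE-416", "summary": "use after free",
     "trigger": "free followed by later use of the same pointer"},
    {"cwe": "CWE-476", "summary": "NULL pointer dereference",
     "trigger": "malloc return value dereferenced without NULL check"},
]

def ask_llm(prompt: str) -> str:
    # Stub: replace with a real chat-completion call.
    return f"(model answer to a {len(prompt)}-char prompt)"

def retrieve(code: str, k: int = 1) -> list:
    """Toy retriever: rank entries by keyword overlap with the code."""
    def score(entry):
        return sum(word in code for word in entry["trigger"].split())
    return sorted(KNOWLEDGE_BASE, key=score, reverse=True)[:k]

def detect(code: str) -> str:
    context = "\n".join(f"[{e['cwe']}] {e['summary']}: {e['trigger']}"
                        for e in retrieve(code))
    return ask_llm(f"Known vulnerability causes:\n{context}\n\n"
                   f"Code under review:\n{code}\n\n"
                   "Does the code match any cause above? Explain.")

print(detect("buf = malloc(n); buf[0] = 0;"))
```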

Resolving Crash Bugs via Large Language Models: An Empirical Study

no code implementations • 16 Dec 2023 • Xueying Du, Mingwei Liu, Juntao Li, Hanlin Wang, Xin Peng, Yiling Lou

Evaluating IntDiagSolver on multiple LLMs, including ChatGPT, Claude, and CodeLlama, reveals a consistent improvement in the accuracy of crash bug resolution.

Language Modelling · Large Language Model
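
The excerpt names IntDiagSolver but not its mechanism. Purely as an assumption about what interactive crash resolution could look like, the loop below lets the model either commit to a fix or ask for more context each round; query_model and fetch_context are hypothetical stubs.

```python
def query_model(transcript: list) -> str:
    # Stub: a real implementation would send the transcript to an LLM.
    return "FIX: guard the dereference with a None check"

def fetch_context(request: str) -> str:
    # Stub: look up the requested source snippet from the project.
    return "def connect(self): ..."

def resolve_crash(stack_trace: str, max_rounds: int = 3) -> str:
    transcript = [f"Crash stack trace:\n{stack_trace}"]
    for _ in range(max_rounds):
        reply = query_model(transcript)
        if reply.startswith("FIX:"):              # model commits to an answer
            return reply
        transcript.append(fetch_context(reply))   # model asked for more context
    return "UNRESOLVED"

print(resolve_crash("AttributeError: 'NoneType' at Session.connect (session.py:42)"))
```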

ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation

1 code implementation • 3 Aug 2023 • Xueying Du, Mingwei Liu, Kaixin Wang, Hanlin Wang, Junwei Liu, Yixuan Chen, Jiayi Feng, Chaofeng Sha, Xin Peng, Yiling Lou

Third, we find that generating the entire class at once (i.e., the holistic generation strategy) is the best strategy only for GPT-4 and GPT-3.5, while method-by-method generation (i.e., the incremental and compositional strategies) works better for the other models, which have a limited ability to understand long instructions and to use information in the middle of the context.

Class-level Code Generation · HumanEval
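
To make the strategy names in the excerpt concrete, here is a hypothetical prompting sketch: holistic asks for the whole class in one completion, while incremental asks for one method at a time, feeding back the code generated so far. complete is a stub for any completion API, not ClassEval's harness.

```python
def complete(prompt: str) -> str:
    # Stub: replace with a real LLM completion call.
    return "        pass  # generated body\n"

def holistic(class_skeleton: str) -> str:
    """One shot: request the entire class in a single completion."""
    return complete(f"Implement the whole class:\n{class_skeleton}")

def incremental(class_skeleton: str, method_sigs: list) -> str:
    """Method by method: each prompt carries the code generated so far."""
    code = class_skeleton
    for sig in method_sigs:
        body = complete(f"Class so far:\n{code}\nImplement only:\n{sig}")
        code += f"{sig}\n{body}"
    return code

skeleton = 'class Stack:\n    """A LIFO stack."""\n'
print(incremental(skeleton, ["    def push(self, x):", "    def pop(self):"]))
```

A compositional variant would generate each method independently and assemble the pieces afterwards, trading shared context for independence between methods.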

Evaluating Instruction-Tuned Large Language Models on Code Comprehension and Generation

no code implementations • 2 Aug 2023 • Zhiqiang Yuan, Junwei Liu, Qiancheng Zi, Mingwei Liu, Xin Peng, Yiling Lou

First, in the zero-shot setting, instruction-tuned LLMs are highly competitive on code comprehension and generation tasks and sometimes even outperform small SOTA models fine-tuned specifically for each downstream task.
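
As a small illustration of the zero-shot setting described above, the sketch below gives the model only an instruction and the input, with no in-context examples; the code-summarization task and the ask stub are assumptions for illustration, not the paper's evaluation harness.

```python
def ask(prompt: str) -> str:
    # Stub: replace with a real model call.
    return "Returns the sum of its two arguments."

def zero_shot_summarize(code: str) -> str:
    # Zero-shot: instruction plus input only, no demonstration examples.
    prompt = ("Summarize the following function in one sentence.\n\n"
              f"{code}\n\nSummary:")
    return ask(prompt)

print(zero_shot_summarize("def add(a, b):\n    return a + b"))
```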
