Search Results for author: Chunqiu Steven Xia

Found 6 papers, 3 papers with code

Top Leaderboard Ranking = Top Coding Proficiency, Always? EvoEval: Evolving Coding Benchmarks via LLM

1 code implementation • 28 Mar 2024 • Chunqiu Steven Xia, Yinlin Deng, Lingming Zhang

Such limitations inevitably lead us to inquire: Is the leaderboard performance on existing benchmarks reliable and comprehensive enough to measure the program synthesis ability of LLMs?

Code Generation • Instruction Following • +1

Copiloting the Copilots: Fusing Large Language Models with Completion Engines for Automated Program Repair

1 code implementation • 1 Sep 2023 • Yuxiang Wei, Chunqiu Steven Xia, Lingming Zhang

Therefore, we propose Repilot, a general code generation framework to further copilot the AI "copilots" (i.e., LLMs) by synthesizing more valid patches during the repair process (a sketch of the idea follows this entry).

Code Generation • Program Repair • +1
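
Concretely, Repilot consults a completion engine during decoding so that infeasible tokens are pruned and only continuations that can still lead to a valid program survive. Below is a minimal Python sketch of that interplay; `propose_next_tokens` and `CompletionEngine` are hypothetical stand-ins for illustration, not the paper's actual API.

```python
# Illustrative sketch only: an LLM proposes next tokens, and a completion
# engine keeps the ones that can still lead to a valid program. Both
# `propose_next_tokens` and `CompletionEngine` are hypothetical stand-ins,
# not Repilot's actual API.
from typing import Callable, List


class CompletionEngine:
    """Hypothetical front end to a language server / compiler."""

    def valid_continuations(self, prefix: str) -> List[str]:
        # A real implementation would query a completion engine (e.g., a
        # Java language server) for tokens that are legal after `prefix`.
        raise NotImplementedError


def synthesize_patch(
    propose_next_tokens: Callable[[str], List[str]],  # LLM token proposals
    engine: CompletionEngine,
    prefix: str,
    max_tokens: int = 64,
) -> str:
    for _ in range(max_tokens):
        proposals = propose_next_tokens(prefix)
        legal = engine.valid_continuations(prefix)
        # Keep LLM proposals the engine deems feasible; if none survive,
        # fall back to the engine's own completions.
        candidates = [tok for tok in proposals if tok in legal] or legal
        if not candidates:
            break
        prefix += candidates[0]  # greedy choice, for simplicity
    return prefix
```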

Fuzz4All: Universal Fuzzing with Large Language Models

1 code implementation • 9 Aug 2023 • Chunqiu Steven Xia, Matteo Paltenghi, Jia Le Tian, Michael Pradel, Lingming Zhang

Moreover, the inputs generated by existing fuzzers are often limited to specific features of the input language, and thus can hardly reveal bugs related to other or new features.

Keep the Conversation Going: Fixing 162 out of 337 bugs for $0.42 each using ChatGPT

no code implementations • 1 Apr 2023 • Chunqiu Steven Xia, Lingming Zhang

For earlier patches that failed to pass all tests, we combine the incorrect patches with their corresponding relevant test failure information to construct a new prompt for the LLM to generate the next patch (a sketch of this loop follows the entry).

Program Repair
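
The excerpt above describes a prompt-rebuilding feedback loop. A minimal sketch under stated assumptions: `query_llm` and `run_tests` are hypothetical placeholders for a ChatGPT wrapper and a test harness, and the paper's actual prompt construction is richer than this.

```python
# Illustrative sketch of the feedback loop; `query_llm` and `run_tests`
# are hypothetical placeholders (run_tests returns "" when all tests pass).
from typing import Callable, Optional


def repair_loop(
    buggy_code: str,
    initial_prompt: str,
    query_llm: Callable[[str], str],
    run_tests: Callable[[str], str],
    max_attempts: int = 5,
) -> Optional[str]:
    prompt = initial_prompt
    for _ in range(max_attempts):
        patch = query_llm(prompt)
        failures = run_tests(patch)
        if not failures:
            return patch  # plausible patch: passes all tests
        # Fold the incorrect patch and its test-failure info into the
        # next prompt, in the spirit of the excerpt above.
        prompt = (
            f"The following patch is incorrect:\n{patch}\n"
            f"It fails with:\n{failures}\n"
            f"Please generate a different fix for this code:\n{buggy_code}"
        )
    return None
```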

Revisiting the Plastic Surgery Hypothesis via Large Language Models

no code implementations • 18 Mar 2023 • Chunqiu Steven Xia, Yifeng Ding, Lingming Zhang

Traditional APR tools have largely leveraged the plastic surgery hypothesis by designing manual or heuristic-based approaches to exploit such existing code ingredients (a minimal illustration follows this entry).

Program Repair
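
To make the hypothesis concrete: a heuristic APR tool might harvest statements already present in the buggy file and rank them as candidate fix ingredients. The sketch below is purely illustrative; real tools operate on ASTs with much richer donor-selection heuristics.

```python
# Illustrative only: harvest lines already present in the buggy file and
# rank them by textual similarity to the buggy statement.
import difflib
from typing import List


def harvest_ingredients(source: str) -> List[str]:
    # Treat each non-empty line of the existing code as a reusable ingredient.
    return [line.strip() for line in source.splitlines() if line.strip()]


def rank_ingredients(buggy_line: str, ingredients: List[str], k: int = 5) -> List[str]:
    # Prefer ingredients textually similar to the buggy statement.
    return sorted(
        ingredients,
        key=lambda ing: difflib.SequenceMatcher(None, buggy_line, ing).ratio(),
        reverse=True,
    )[:k]
```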

Conversational Automated Program Repair

no code implementations • 30 Jan 2023 • Chunqiu Steven Xia, Lingming Zhang

As such, we leverage the long-term context window of LLMs to not only avoid generating previously incorrect patches but also incorporate validation feedback to help the model understand the semantic meaning of the program under test (a sketch follows this entry).

Program Repair
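
A minimal sketch of such a conversational loop, assuming an OpenAI-style chat interface; `chat` and `validate` are hypothetical placeholders, not the paper's implementation. Appending each failed attempt and its feedback as new turns keeps the whole repair history visible to the model without any bespoke memory mechanism.

```python
# Illustrative sketch: the whole dialogue (earlier incorrect patches plus
# their validation feedback) stays in the model's context window. `chat`
# and `validate` are hypothetical placeholders; message roles follow a
# common OpenAI-style chat API, not necessarily the paper's setup.
from typing import Callable, Dict, List, Optional

Message = Dict[str, str]


def conversational_repair(
    bug_prompt: str,
    chat: Callable[[List[Message]], str],
    validate: Callable[[str], str],  # returns "" when validation passes
    max_turns: int = 5,
) -> Optional[str]:
    history: List[Message] = [{"role": "user", "content": bug_prompt}]
    for _ in range(max_turns):
        patch = chat(history)
        feedback = validate(patch)
        if not feedback:
            return patch
        # Keep the incorrect patch and its feedback in the conversation so
        # later turns can avoid repeating previously failed patches.
        history.append({"role": "assistant", "content": patch})
        history.append({
            "role": "user",
            "content": f"That patch fails validation:\n{feedback}\n"
                       "Please propose an alternative fix.",
        })
    return None
```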
