Program Repair
33 papers with code • 3 benchmarks • 8 datasets
The task of teaching ML models to modify an existing program so as to fix a bug in the given code.
Datasets
Subtasks
Latest papers with no code
RepairAgent: An Autonomous, LLM-Based Agent for Program Repair
Unlike existing deep learning-based approaches, which prompt a model with a fixed prompt or in a fixed feedback loop, our work treats the LLM as an agent capable of autonomously planning and executing actions to fix bugs by invoking suitable tools.
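As an illustrative sketch only (the function names and the toy stand-in for the LLM are hypothetical, not RepairAgent's actual implementation), such an agent loop lets the model choose its next tool based on the latest observation instead of following one fixed prompt:

```python
def repair_agent(code, model, tools, max_steps=5):
    """Autonomous repair loop: at each step the model picks a tool to
    invoke, conditioned on the latest observation."""
    observation = tools["run_tests"](code)
    for _ in range(max_steps):
        if observation == "PASS":
            return code
        tool, arg = model(observation)   # LLM decides the next action
        if tool == "apply_patch":
            code = arg                   # replace the buggy code with the patch
            observation = tools["run_tests"](code)
        elif tool == "read_code":
            observation = code           # let the model inspect the program
    return code

# Toy example: the buggy program concatenates strings instead of adding.
buggy = "def add(a, b): return str(a) + str(b)"
fixed = "def add(a, b): return a + b"

def run_tests(src):
    env = {}
    exec(src, env)
    return "PASS" if env["add"](1, 2) == 3 else "FAIL: add(1, 2) != 3"

def toy_model(obs):
    # Stand-in for an LLM: on any test failure, propose the known fix.
    return ("apply_patch", fixed)

result = repair_agent(buggy, toy_model, {"run_tests": run_tests})
```

A real system would replace `toy_model` with an LLM API call and expose richer tools (fault localization, code search, patch validation); the loop structure is the point.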
A Study of Vulnerability Repair in JavaScript Programs with Large Language Models
In recent years, JavaScript has become the most widely used programming language, especially in web development.
DeepCode AI Fix: Fixing Security Vulnerabilities with Large Language Models
We show that the task is difficult as it requires the model to learn long-range code relationships, a task that inherently relies on extensive amounts of training data.
A Novel Approach for Automatic Program Repair using Round-Trip Translation with Large Language Models
We investigate whether this correction capability of Large Language Models (LLMs) extends to Automatic Program Repair (APR).
Nova$^+$: Generative Language Models for Binaries
We build Nova$^+$ to further boost Nova using two new pre-training tasks, i.e., optimization generation and optimization level prediction, which are designed to learn binary optimization and align equivalent binaries.
ConDefects: A New Dataset to Address the Data Leakage Concern for LLM-based Fault Localization and Program Repair
With the growing interest in Large Language Models (LLMs) for fault localization and program repair, ensuring the integrity and generalizability of LLM-based methods becomes paramount.
Enhancing Genetic Improvement Mutations Using Large Language Models
We find that the number of patches passing unit tests is up to 75% higher with LLM-based edits than with standard Insert edits.
Automated Bug Generation in the era of Large Language Models
From the classic software engineering point of view, a hard-to-repair bug differs from the correct code in multiple locations, making it hard to localize and repair.
Program Repair with Minimal Edits Using CodeT5
The experimental results show that the fine-tuned CodeT5 achieves a pass@100 of 91.95% and an average edit distance to the most similar correct program of 6.84, indicating that at least one correct program can be suggested by generating 100 candidate programs.
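Both metrics are standard and easy to reproduce; as a minimal sketch (not the paper's evaluation code), pass@k can be computed with the unbiased estimator popularized by the Codex evaluation setup, and edit distance with a classic Levenshtein dynamic program:

```python
import math

def pass_at_k(n, c, k):
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations of which c are
    correct, passes the tests."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - math.prod((n - c - i) / (n - i) for i in range(k))

def edit_distance(a, b):
    """Levenshtein distance between two sequences, rolling-array DP."""
    m, n = len(a), len(b)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                     # deletion
                        dp[j - 1] + 1,                 # insertion
                        prev + (a[i - 1] != b[j - 1])) # substitution
            prev = cur
    return dp[n]
```

For example, `edit_distance("kitten", "sitting")` is 3, and pass@100 is simply `pass_at_k(n, c, 100)` over n sampled candidates with c passing.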
Frustrated with Code Quality Issues? LLMs can Help!
We present a tool, CORE (short for COde REvisions), architected as a pair of LLMs: a proposer and a ranker.