Program Repair
34 papers with code • 3 benchmarks • 8 datasets
The task of training ML models to modify an existing program in order to fix a bug in the given code.
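As a minimal illustration of the task (a hypothetical toy example, not drawn from any benchmark), a repair model takes a buggy program and produces a patched version that passes the failing test:

```python
# Hypothetical toy example: an off-by-one bug and its repaired version.

def sum_first_n_buggy(xs, n):
    # Bug: range(n - 1) drops the n-th element.
    return sum(xs[i] for i in range(n - 1))

def sum_first_n_fixed(xs, n):
    # Repair: iterate over all of the first n elements.
    return sum(xs[i] for i in range(n))

print(sum_first_n_buggy([1, 2, 3, 4], 3))  # 3 (incorrect)
print(sum_first_n_fixed([1, 2, 3, 4], 3))  # 6 (correct)
```

In benchmark terms, the failing test (here, expecting 6) plays the role of the bug-exposing oracle that a candidate patch must satisfy.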
Most implemented papers
Human-In-The-Loop Automatic Program Repair
Our key challenge is to maximize the oracle's accuracy in predicting which tests are bug-exposing given a small budget of queries.
Arachne: Search Based Repair of Deep Neural Networks
The rapid and widespread adoption of Deep Neural Networks (DNNs) has called for ways to test their behaviour, and many testing approaches have successfully revealed misbehaviour of DNNs.
Global Relational Models of Source Code
By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation.
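A variable misuse is a bug where a syntactically valid but wrong variable is used; the model's job is to identify (and correct) the misused name. A hypothetical example of the bug pattern, not taken from the paper's dataset:

```python
# Hypothetical variable-misuse bug: `low` is returned where `high` was intended.

def clamp_buggy(value, low, high):
    if value < low:
        return low
    if value > high:
        return low  # misuse: type-correct, but the wrong variable
    return value

def clamp_fixed(value, low, high):
    if value < low:
        return low
    if value > high:
        return high  # repaired: the intended variable
    return value

print(clamp_buggy(10, 0, 5))  # 0 (incorrect)
print(clamp_fixed(10, 0, 5))  # 5 (correct)
```

Because both variants type-check and run, localizing such bugs requires the model to reason about the program's intent, which is why the task is a common probe for code representations.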
CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair
To address these challenges, we propose a new generate-and-validate (G&V) technique, CoCoNuT, which uses ensemble learning on the combination of convolutional neural networks (CNNs) and a new context-aware neural machine translation (NMT) architecture to automatically fix bugs in multiple programming languages.
Robot Action Selection Learning via Layered Dimension Informed Program Synthesis
Action selection policies (ASPs), used to compose low-level robot skills into complex high-level tasks, are commonly represented as neural networks (NNs) in the state of the art.
Patching as Translation: the Data and the Metaphor
Given these findings, we demonstrate how a more principled approach to model design, based on our empirical findings and general knowledge of software development, can lead to better solutions.
Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks
More practically, we evaluate these models on the task of learning to execute partial programs, as might arise if using the model as a heuristic function in program synthesis.
CURE: Code-Aware Neural Machine Translation for Automatic Program Repair
Finally, CURE uses a subword tokenization technique to generate a smaller search space that contains more correct fixes.
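Subword tokenization splits rare identifiers into frequent pieces, so the model's vocabulary (and hence its output search space) stays small. A greedy longest-match sketch over an assumed toy vocabulary (CURE's actual tokenizer and vocabulary are not reproduced here):

```python
# Minimal sketch of subword tokenization for a code identifier,
# using a hypothetical toy vocabulary of frequent subwords.

VOCAB = {"get", "set", "Value", "Index", "Name"}

def subword_tokenize(identifier, vocab):
    """Split an identifier into subwords by greedy longest match,
    falling back to single characters for unknown spans."""
    pieces, i = [], 0
    while i < len(identifier):
        for j in range(len(identifier), i, -1):
            if identifier[i:j] in vocab:
                pieces.append(identifier[i:j])
                i = j
                break
        else:
            pieces.append(identifier[i])  # unknown character fallback
            i += 1
    return pieces

print(subword_tokenize("getValueIndex", VOCAB))  # ['get', 'Value', 'Index']
```

Rather than one vocabulary entry per full identifier, a few hundred shared subwords can cover most identifiers, which is what keeps the patch search space compact.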
Unified Pre-training for Program Understanding and Generation
Experiments on code summarization in the English language, code generation, and code translation in seven programming languages show that PLBART outperforms or rivals state-of-the-art models.
Assessing the Effectiveness of Syntactic Structure to Learn Code Edit Representations
In this paper, we elaborate upon this state-of-the-art approach and modify it to represent source code edits.