Program Repair

20 papers with code • 3 benchmarks • 6 datasets

Program repair is the task of automatically fixing bugs or errors in a program's source code, typically guided by test cases, compiler diagnostics, or learned models of correct code.

Greatest papers with code

Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks

google-research/google-research NeurIPS 2020

More practically, we evaluate these models on the task of learning to execute partial programs, as might arise if using the model as a heuristic function in program synthesis.

Code Completion • Learning to Execute +2
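To make the partial-execution setup concrete, here is a minimal sketch of how (partial program, execution state) training pairs could be generated; the straight-line program template and variable names are illustrative, not the paper's actual data pipeline.

```python
# Hypothetical sketch: building (partial program, execution state) training pairs.
# The program template and variable names are illustrative only.
import random

def random_straightline_program(n_stmts=5):
    """Generate a tiny straight-line program over a single variable v."""
    stmts = ["v = %d" % random.randint(0, 9)]
    for _ in range(n_stmts - 1):
        op = random.choice(["+", "-", "*"])
        stmts.append("v = v %s %d" % (op, random.randint(1, 9)))
    return stmts

def partial_execution_example(stmts):
    """Truncate the program at a random prefix and execute it to get the label."""
    cut = random.randint(1, len(stmts))
    prefix = stmts[:cut]
    env = {}
    exec("\n".join(prefix), {}, env)   # tiny trusted programs only
    return "\n".join(prefix), env["v"]

if __name__ == "__main__":
    program = random_straightline_program()
    partial, value_of_v = partial_execution_example(program)
    print(partial, "=> v =", value_of_v)
```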

Learning and Evaluating Contextual Embedding of Source Code

google-research/google-research ICML 2020

We fine-tune CuBERT on our benchmark tasks, and compare the resulting models to different variants of Word2Vec token embeddings, BiLSTM and Transformer models, as well as published state-of-the-art models, showing that CuBERT outperforms them all, even with shorter training, and with fewer labeled examples.

Contextual Embedding for Source Code • Exception type +7
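A fine-tuning step for a BERT-style code encoder on one of these classification benchmarks (e.g., exception-type prediction) looks roughly like the sketch below; the checkpoint name and label count are placeholders, not the released CuBERT artifacts.

```python
# Minimal fine-tuning sketch for a BERT-style code encoder on a classification
# benchmark such as exception-type prediction. Checkpoint name, label count, and
# the example snippet are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"          # stand-in for a code-pretrained encoder
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=20)

snippet = "try:\n    f = open(path)\nexcept __HOLE__:\n    pass"
batch = tokenizer(snippet, truncation=True, padding=True, return_tensors="pt")
labels = torch.tensor([3])                # index of the ground-truth exception type

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
loss = model(**batch, labels=labels).loss  # standard cross-entropy fine-tuning step
loss.backward()
optimizer.step()
```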

Graph-based, Self-Supervised Program Repair from Diagnostic Feedback

michiyasunaga/DrRepair ICML 2020

Second, we present a self-supervised learning paradigm for program repair that leverages unlabeled programs available online to create a large amount of extra program repair examples, which we use to pre-train our models.

Code Generation • Graph Learning +2
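The self-supervised idea of manufacturing repair examples from unlabeled code can be illustrated with a simple corruption routine; the perturbation rules below are simplified stand-ins for the paper's procedure.

```python
# Illustrative corruption of a correct program into a (broken, fixed) repair pair,
# in the spirit of creating synthetic pre-training data from unlabeled code.
import random
import re

PERTURBATIONS = [
    lambda line: line.replace(";", "", 1),             # drop a semicolon
    lambda line: line.replace("==", "=", 1),           # comparison -> assignment
    lambda line: re.sub(r"\)\s*$", "", line, count=1)  # drop a closing paren
]

def corrupt(program_lines):
    """Apply one random perturbation to one random line of a correct program."""
    idx = random.randrange(len(program_lines))
    broken = list(program_lines)
    # A perturbation may be a no-op on some lines; a real pipeline would retry.
    broken[idx] = random.choice(PERTURBATIONS)(broken[idx])
    return broken, idx  # the original program is the repair target

good = ["int x = read();", "if (x == 0) {", "    print(x);", "}"]
bad, edited_line = corrupt(good)
print("\n".join(bad), "\n-- fix line", edited_line, "back to:", good[edited_line])
```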

Unified Pre-training for Program Understanding and Generation

wasiahmad/PLBART NAACL 2021

Experiments on code summarization in the English language, code generation, and code translation in seven programming languages show that PLBART outperforms or rivals state-of-the-art models.

Clone Detection • Code Generation +8
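Applying a pre-trained code sequence-to-sequence model to a downstream generation task such as code summarization might look like the following sketch; the Hugging Face checkpoint name is an assumption, and in practice the model is fine-tuned on (code, summary) pairs before generation is useful.

```python
# Sketch of using a pre-trained code sequence-to-sequence model for code
# summarization. The checkpoint name is an assumption; substitute whatever
# PLBART checkpoint you have available.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "uclanlp/plbart-base"   # assumed public PLBART checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

code = "def add(a, b):\n    return a + b"
inputs = tokenizer(code, return_tensors="pt", truncation=True)
# Without task-specific fine-tuning the output is not a useful summary;
# fine-tune on (code, docstring) pairs first.
summary_ids = model.generate(**inputs, max_length=30, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```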

Code Generation Based on Deep Learning: a Brief Review

zysszy/TreeGen 15 Jun 2021

Automatic software development has been a research hotspot in software engineering (SE) over the past decade.

Code Completion • Code Generation +1

SequenceR: Sequence-to-Sequence Learning for End-to-End Program Repair

kth/SequenceR 24 Dec 2018

This paper presents a novel end-to-end approach to program repair based on sequence-to-sequence learning.

Program Repair
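The end-to-end formulation can be sketched as a toy encoder-decoder that reads the tokens of a buggy line (plus context) and emits the tokens of the fixed line; dimensions and data are illustrative, and SequenceR itself adds buggy-context abstraction and a copy mechanism on top of this basic setup.

```python
# Toy sequence-to-sequence formulation of program repair: encode the tokens of a
# buggy line (plus context) and decode the tokens of the fixed line.
import torch
import torch.nn as nn

VOCAB, DIM = 1000, 128

class RepairSeq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.encoder = nn.LSTM(DIM, DIM, batch_first=True)
        self.decoder = nn.LSTM(DIM, DIM, batch_first=True)
        self.out = nn.Linear(DIM, VOCAB)

    def forward(self, buggy_tokens, fixed_tokens):
        _, state = self.encoder(self.embed(buggy_tokens))   # summarize buggy code
        dec_out, _ = self.decoder(self.embed(fixed_tokens), state)
        return self.out(dec_out)                            # per-step token logits

model = RepairSeq2Seq()
buggy = torch.randint(0, VOCAB, (1, 20))   # token ids of buggy line + context
fixed = torch.randint(0, VOCAB, (1, 12))   # token ids of the target fixed line
# Teacher forcing; shifting of the target sequence is omitted for brevity.
logits = model(buggy, fixed)
loss = nn.functional.cross_entropy(logits.view(-1, VOCAB), fixed.view(-1))
loss.backward()
```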

Global Relational Models of Source Code

VHellendoorn/ICLR20-Great ICLR 2020

By studying a popular, non-trivial program repair task, variable-misuse identification, we explore the relative merits of traditional and hybrid model families for code representation.

Variable misuse
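A variable-misuse training example is typically constructed by swapping one occurrence of an in-scope variable for another and recording the location; a minimal sketch, with made-up tokens:

```python
# Illustrative construction of a variable-misuse example: swap one occurrence of
# an in-scope variable for another, and record where the bug was introduced so a
# model can be trained to localize (and repair) it. Token handling is simplified.
import random

def make_variable_misuse(tokens, variables):
    """tokens: list of source tokens; variables: names in scope."""
    positions = [i for i, t in enumerate(tokens) if t in variables]
    bug_pos = random.choice(positions)
    original = tokens[bug_pos]
    wrong = random.choice([v for v in variables if v != original])
    buggy = list(tokens)
    buggy[bug_pos] = wrong
    return buggy, bug_pos, original   # buggy program, bug location, correct name

tokens = ["total", "=", "total", "+", "price", "*", "count"]
buggy, where, fix = make_variable_misuse(tokens, {"total", "price", "count"})
print(" ".join(buggy), "| bug at token", where, "| should be", fix)
```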

Break-It-Fix-It: Unsupervised Learning for Program Repair

michiyasunaga/bifi 11 Jun 2021

To bridge this gap, we propose a new training approach, Break-It-Fix-It (BIFI), which has two key ideas: (i) we use the critic to check a fixer's output on real bad inputs and add good (fixed) outputs to the training data, and (ii) we train a breaker to generate realistic bad code from good code.

Code Repair • Data Augmentation +3
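The two ideas translate into a training loop roughly like the sketch below; `critic`, `fixer`, and `breaker` are hypothetical objects standing in for a syntax checker and two seq2seq models, so only the control flow is meant to mirror the description above.

```python
# High-level sketch of one Break-It-Fix-It style round. The critic/fixer/breaker
# interfaces are hypothetical; only the control flow mirrors the two key ideas.
def bifi_round(real_bad, real_good, critic, fixer, breaker):
    paired_data = []

    # (i) Run the fixer on real broken code; keep only outputs the critic accepts.
    for bad in real_bad:
        candidate = fixer.repair(bad)
        if critic.is_good(candidate):
            paired_data.append((bad, candidate))

    # (ii) Train the breaker good -> bad on the verified pairs, then use it to
    # corrupt real good code into realistic broken examples for the fixer.
    breaker.train(sources=[good for _, good in paired_data],
                  targets=[bad for bad, _ in paired_data])
    for good in real_good:
        broken = breaker.corrupt(good)
        if not critic.is_good(broken):       # keep only genuinely broken outputs
            paired_data.append((broken, good))

    fixer.train(sources=[bad for bad, _ in paired_data],
                targets=[good for _, good in paired_data])
    return fixer, breaker
```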

Dynamic Neural Program Embedding for Program Repair

keowang/dynamic-program-embedding 20 Nov 2017

Evaluation results show that our new semantic program embedding significantly outperforms the syntactic program embeddings based on token sequences and abstract syntax trees.

Fault localization
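The dynamic signal behind such a semantic embedding is a trace of variable valuations during execution; a minimal sketch of collecting one with Python's sys.settrace (the function under test is made up for illustration):

```python
# Sketch of collecting a variable valuation trace for a small function using
# sys.settrace. A trace like this (one variable state per executed line) is the
# kind of dynamic signal a semantic program embedding can consume, in contrast
# to purely syntactic token- or AST-based embeddings.
import sys

trace = []

def record(frame, event, arg):
    if event == "line" and frame.f_code.co_name == "buggy_max":
        trace.append((frame.f_lineno, dict(frame.f_locals)))
    return record

def buggy_max(xs):
    best = 0                 # bug for all-negative inputs: should be xs[0]
    for x in xs:
        if x > best:
            best = x
    return best

sys.settrace(record)
buggy_max([-3, -1, -7])
sys.settrace(None)

for lineno, local_vars in trace:
    print(lineno, local_vars)
```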