Code Repair

5 papers with code • 1 benchmark • 4 datasets

Code repair is the task of automatically fixing errors in source code, such as compilation errors, syntax errors, or other bugs, so that the resulting program compiles or runs correctly.

Most implemented papers

CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation

microsoft/CodeXGLUE 9 Feb 2021

Benchmark datasets have a significant impact on accelerating research in programming language tasks.

Learning Performance-Improving Code Edits

madaan/pie-perf 15 Feb 2023

In this paper, we investigate the ability of large language models (LLMs) to suggest functionally correct, performance-improving code edits.

OctoPack: Instruction Tuning Code Large Language Models

bigcode-project/octopack 14 Aug 2023

We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1).

MACER: A Modular Framework for Accelerated Compilation Error Repair

purushottamkar/macer 28 May 2020

Automated compilation error repair, the problem of suggesting fixes to buggy programs that fail to compile, has generated significant interest in recent years.

Break-It-Fix-It: Unsupervised Learning for Program Repair

michiyasunaga/bifi 11 Jun 2021

To bridge this gap, we propose a new training approach, Break-It-Fix-It (BIFI), which has two key ideas: (i) we use the critic to check a fixer's output on real bad inputs and add good (fixed) outputs to the training data, and (ii) we train a breaker to generate realistic bad code from good code.
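As a rough illustration of the BIFI loop described above, the sketch below pairs a fixer and a breaker with an automatic critic. The `Seq2Seq` interface, `bifi_round`, and all parameter names are hypothetical stand-ins for illustration only, not the API of the michiyasunaga/bifi repository.

```python
from typing import Callable, List, Tuple

# Hypothetical minimal interface for a seq2seq model; the real BIFI code
# uses concrete trained Transformer models instead.
class Seq2Seq:
    def generate(self, src: str) -> str: ...
    def train(self, pairs: List[Tuple[str, str]]) -> None: ...

def bifi_round(fixer: Seq2Seq,
               breaker: Seq2Seq,
               critic: Callable[[str], bool],
               real_bad_code: List[str],
               good_code: List[str]) -> None:
    """One round of the BIFI-style self-training loop (paraphrased sketch)."""
    # (i) Run the fixer on real broken inputs; keep only outputs that the
    #     critic (an automatic checker, e.g. "does it parse/compile?") accepts.
    fixer_pairs = []
    for bad in real_bad_code:
        fixed = fixer.generate(bad)
        if critic(fixed):
            fixer_pairs.append((bad, fixed))

    # (ii) Run the breaker on good code to synthesize realistic bad examples,
    #      keeping only outputs the critic rejects (i.e. genuinely broken).
    breaker_pairs = []
    for good in good_code:
        broken = breaker.generate(good)
        if not critic(broken):
            breaker_pairs.append((good, broken))

    # Retrain the fixer on verified (bad -> fixed) pairs and the breaker on
    # (good -> broken) pairs, then repeat for another round.
    fixer.train(fixer_pairs)
    breaker.train(breaker_pairs)
```

The design point the sketch tries to capture is that the critic lets both models bootstrap from unlabeled code: only fixer outputs verified as good, and only breaker outputs verified as bad, are added back into the training data.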