Code Repair
5 papers with code • 1 benchmark • 4 datasets
Most implemented papers
CodeXGLUE: A Machine Learning Benchmark Dataset for Code Understanding and Generation
Benchmark datasets have a significant impact on accelerating research in programming language tasks.
Learning Performance-Improving Code Edits
In this paper, we investigate the ability of large language models (LLMs) to suggest functionally correct, performance-improving code edits.
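As a purely illustrative example (not drawn from the paper's dataset), a "performance-improving code edit" is the kind of functionally equivalent slow-to-fast rewrite sketched below, here a quadratic duplicate count replaced by a linear one.

```python
# Illustrative slow -> fast edit of the kind an LLM would be asked to propose.
import timeit
from collections import Counter


def count_duplicates_slow(items):
    """O(n^2): re-scans the whole list for every element."""
    return sum(1 for x in items if items.count(x) > 1)


def count_duplicates_fast(items):
    """O(n): one pass to tally counts, then a linear check."""
    counts = Counter(items)
    return sum(1 for x in items if counts[x] > 1)


if __name__ == "__main__":
    data = list(range(2_000)) + list(range(500))  # values 0..499 appear twice
    assert count_duplicates_slow(data) == count_duplicates_fast(data)
    slow = timeit.timeit(lambda: count_duplicates_slow(data), number=5)
    fast = timeit.timeit(lambda: count_duplicates_fast(data), number=5)
    print(f"slow: {slow:.3f}s  fast: {fast:.3f}s")
```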
OctoPack: Instruction Tuning Code Large Language Models
We benchmark CommitPack against other natural and synthetic code instructions (xP3x, Self-Instruct, OASST) on the 16B parameter StarCoder model, and achieve state-of-the-art performance among models not trained on OpenAI outputs, on the HumanEval Python benchmark (46.2% pass@1).
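The pass@1 number above follows the standard unbiased pass@k estimator introduced with HumanEval (Chen et al., 2021); a minimal sketch of that estimator is shown below, with the sample counts in the usage line chosen only for illustration.

```python
import numpy as np


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: n samples per problem, c of which pass the tests."""
    if n - c < k:
        return 1.0
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))


# Hypothetical example: 200 samples for one problem, 95 passing -> pass@1 = 0.475
print(pass_at_k(n=200, c=95, k=1))
```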
MACER: A Modular Framework for Accelerated Compilation Error Repair
Automated compilation error repair, the problem of suggesting fixes to buggy programs that fail to compile, has generated significant interest in recent years.
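To make the task itself concrete: the input is a program the compiler rejects, and a repair is accepted only if it compiles. The sketch below is not MACER's pipeline (which targets C programs and predicts repair classes with a trained model); it just uses Python's own compiler as a stand-in oracle to show the buggy-program / candidate-fix setup.

```python
def compiles(source: str) -> bool:
    """Use Python's compiler as the 'does it compile?' oracle (stand-in for a C compiler)."""
    try:
        compile(source, "<student-submission>", "exec")
        return True
    except SyntaxError:
        return False


buggy = "def add(a, b)\n    return a + b\n"   # missing ':' on the def line
fixed = "def add(a, b):\n    return a + b\n"  # candidate repair

assert not compiles(buggy)
assert compiles(fixed)
print("buggy compiles:", compiles(buggy), "| fixed compiles:", compiles(fixed))
```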
Break-It-Fix-It: Unsupervised Learning for Program Repair
To bridge this gap, we propose a new training approach, Break-It-Fix-It (BIFI), which has two key ideas: (i) we use the critic to check a fixer's output on real bad inputs and add good (fixed) outputs to the training data, and (ii) we train a breaker to generate realistic bad code from good code.
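A toy sketch of the BIFI data-generation loop is given below. In the paper the fixer and breaker are neural models retrained each round; here they are trivial stand-ins, with Python's compiler playing the critic, so that the data flow of ideas (i) and (ii) is runnable end to end.

```python
import random


def critic(code: str) -> bool:
    """Critic: an oracle for 'is this code good?' (here: does it parse)."""
    try:
        compile(code, "<snippet>", "exec")
        return True
    except SyntaxError:
        return False


def fixer(bad_code: str) -> str:
    """Stand-in fixer: naively appends ')' (BIFI uses a trained seq2seq model)."""
    return bad_code + ")"


def breaker(good_code: str) -> str:
    """Stand-in breaker: drops one character (BIFI trains this on fixer-generated pairs)."""
    i = random.randrange(len(good_code))
    return good_code[:i] + good_code[i + 1:]


real_bad = ["print((1 + 2)", "x = max(1, 2"]  # unpaired, genuinely broken inputs
good = ["print(1 + 2)", "y = min(3, 4)"]      # unpaired good code

paired_data = []
# (i) run the fixer on real bad inputs; keep only critic-verified outputs as (bad, good) pairs
for bad in real_bad:
    candidate = fixer(bad)
    if critic(candidate):
        paired_data.append((bad, candidate))
# (ii) run the breaker on good code; keep only outputs the critic rejects as (bad, good) pairs
for g in good:
    broken = breaker(g)
    if not critic(broken):
        paired_data.append((broken, g))

print(paired_data)  # new training pairs for the next round of fixer training
```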