Bug fixing

18 papers with code • 1 benchmark • 1 dataset

Most implemented papers

FixEval: Execution-based Evaluation of Program Fixes for Programming Problems

mahimanzum/fixeval 15 Jun 2022

To address this issue, we introduce FixEval, a benchmark comprising buggy code submissions to competitive programming problems and their corresponding fixes.
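
The key idea in execution-based evaluation is that a candidate fix is judged by the test verdicts it earns, not by textual similarity to a reference patch. The sketch below illustrates that idea under simplified assumptions; the names (run_candidate, TestCase) and the use of stdin/stdout comparison are illustrative, not FixEval's actual harness.

```python
# Minimal sketch of execution-based evaluation: a candidate fix passes only if
# it produces the expected output on every test case. Illustrative only.
import subprocess
from dataclasses import dataclass

@dataclass
class TestCase:
    stdin: str
    expected_stdout: str

def run_candidate(source_path: str, tests: list[TestCase], timeout: float = 2.0) -> bool:
    """Return True if the candidate program passes every test case."""
    for test in tests:
        try:
            result = subprocess.run(
                ["python", source_path],
                input=test.stdin,
                capture_output=True,
                text=True,
                timeout=timeout,
            )
        except subprocess.TimeoutExpired:
            return False  # time limit exceeded counts as a failed fix
        if result.returncode != 0:
            return False  # runtime error
        if result.stdout.strip() != test.expected_stdout.strip():
            return False  # wrong answer
    return True

# Example: the fix is accepted or rejected purely on test outcomes.
# passed = run_candidate("candidate_fix.py", [TestCase("1 2\n", "3\n")])
```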

CoditT5: Pretraining for Source Code and Natural Language Editing

engineeringsoftware/coditt5 10 Aug 2022

Pretrained language models have been shown to be effective in many software-related generation tasks; however, they are not well-suited for editing tasks as they are not designed to reason about edits.
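
One way to make the editing framing concrete is to represent a fix as an explicit sequence of edit operations rather than as a full rewrite of the target code. The snippet below is a generic illustration built on Python's difflib; it is not CoditT5's actual output format.

```python
# Sketch: turn a buggy -> fixed pair into (op, tokens) edit steps, so a model's
# target explicitly encodes what to keep, delete, insert, or replace.
import difflib

def edit_sequence(buggy: str, fixed: str):
    """Derive a token-level edit script from a buggy/fixed code pair."""
    a, b = buggy.split(), fixed.split()
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if tag == "equal":
            ops.append(("keep", a[i1:i2]))
        elif tag == "delete":
            ops.append(("delete", a[i1:i2]))
        elif tag == "insert":
            ops.append(("insert", b[j1:j2]))
        else:  # replace
            ops.append(("replace", b[j1:j2]))
    return ops

print(edit_sequence("return a + a ;", "return a + b ;"))
```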

ADPTriage: Approximate Dynamic Programming for Bug Triage

hadijahanshahi/adptriage 2 Nov 2022

In this study, we develop a Markov decision process (MDP) model for an online bug triage task.
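
To give a feel for the MDP framing, the toy sketch below treats each decision epoch as a choice among feasible bug-to-developer assignments (or deferral) with an immediate reward. The state, action, and reward definitions here are simplified assumptions for illustration, not the paper's formulation, and the policy shown is a myopic baseline rather than approximate dynamic programming.

```python
# Toy MDP-style decision step for online bug triage. Illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Bug:
    bug_id: int
    severity: int  # higher = more urgent

@dataclass(frozen=True)
class Developer:
    dev_id: int
    skill: int  # higher = resolves bugs faster

def actions(open_bugs, idle_devs):
    """Feasible actions at this decision epoch: defer (None) or assign a bug to a developer."""
    return [None] + [(b, d) for b in open_bugs for d in idle_devs]

def reward(action):
    """Immediate reward: urgent bug + skilled developer pays off; deferral costs nothing now."""
    if action is None:
        return 0.0
    bug, dev = action
    return bug.severity * dev.skill - 1.0  # -1.0 models assignment overhead

def greedy_policy(open_bugs, idle_devs):
    """Myopic baseline: maximize immediate reward. An ADP approach would add an
    estimate of the downstream value of the resulting state."""
    return max(actions(open_bugs, idle_devs), key=reward)

bugs = [Bug(1, severity=3), Bug(2, severity=1)]
devs = [Developer(10, skill=2)]
print(greedy_policy(bugs, devs))  # assigns the most severe bug to the available developer
```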

Automating Code-Related Tasks Through Transformers: The Impact of Pre-training

rosaliatufano/impact_pre-training 8 Feb 2023

Then, we pre-train 32 transformers using both (i) generic pre-training objectives usually adopted in SE and (ii) pre-training objectives tailored to the specific code-related tasks we experiment with, namely bug fixing, code summarization, and code completion.
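
The contrast between the two families of objectives can be sketched as text-to-text training pairs: a generic T5-style span-corruption example versus a bug-fixing example mapping buggy code to its fixed version. The exact objectives used in the paper may differ; this is only an illustration of the two families it compares.

```python
# Sketch: generic span corruption vs. a task-tailored buggy -> fixed objective,
# both expressed as (source, target) text pairs. Illustrative only.
import random

def span_corruption(tokens, mask_ratio=0.15):
    """Generic objective: mask a random span, train the model to recover it."""
    n_mask = max(1, int(len(tokens) * mask_ratio))
    start = random.randrange(0, len(tokens) - n_mask + 1)
    source = tokens[:start] + ["<extra_id_0>"] + tokens[start + n_mask:]
    target = ["<extra_id_0>"] + tokens[start:start + n_mask]
    return " ".join(source), " ".join(target)

def bug_fixing_pair(buggy_code, fixed_code):
    """Task-tailored objective: map buggy code directly to its fixed version."""
    return f"fix: {buggy_code}", fixed_code

print(span_corruption("if ( x == null ) return ;".split()))
print(bug_fixing_pair("if (x = null) return;", "if (x == null) return;"))
```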

Bug Characterization in Machine Learning-based Systems

ml-bugs-2022/replication-package 26 Jul 2023

Based on our results, fixing ML bugs is more costly and ML components are more error-prone, compared to non-ML bugs and non-ML components respectively.

AutoCodeRover: Autonomous Program Improvement

nus-apr/auto-code-rover 8 Apr 2024

Recent progress in Large Language Models (LLMs) has significantly impacted the development process, where developers can use LLM-based programming assistants to achieve automated coding.

Unraveling Code Clone Dynamics in Deep Learning Frameworks

mia1q/code-clone-dl-frameworks 25 Apr 2024

We empirically analyze code clones in nine popular DL frameworks, i.e., TensorFlow, Paddle, PyTorch, Aesara, Ray, MXNet, Keras, Jax and BentoML, to investigate (1) the characteristics of the long-term code cloning evolution over releases in each framework, (2) the short-term, i.e., within-release, code cloning patterns and their influence on the long-term trends, and (3) the file-level code clones within the DL frameworks.