Search Results for author: Yiling Lou

Found 6 papers, 4 papers with code

Resolving Crash Bugs via Large Language Models: An Empirical Study

no code implementations · 16 Dec 2023 · Xueying Du, Mingwei Liu, Juntao Li, Hanlin Wang, Xin Peng, Yiling Lou

Evaluating IntDiagSolver on multiple LLMs, including ChatGPT, Claude, and CodeLlama, reveals a consistent improvement in the accuracy of crash bug resolution.

Language Modelling · Large Language Model

ClassEval: A Manually-Crafted Benchmark for Evaluating LLMs on Class-level Code Generation

1 code implementation · 3 Aug 2023 · Xueying Du, Mingwei Liu, Kaixin Wang, Hanlin Wang, Junwei Liu, Yixuan Chen, Jiayi Feng, Chaofeng Sha, Xin Peng, Yiling Lou

Third, we find that generating the entire class at once (i.e., the holistic generation strategy) is the best strategy only for GPT-4 and GPT-3.5, while method-by-method generation (i.e., the incremental and compositional strategies) works better for the other models, which have a limited ability to understand long instructions and to utilize intermediate information.

Class-level Code Generation
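
To make the three strategies concrete, here is a minimal, hypothetical sketch of how the prompts might be constructed; the llm_generate helper is a placeholder for any LLM backend and is not part of the ClassEval benchmark.

```python
# Hypothetical sketch of the three generation strategies (not ClassEval code).
# llm_generate is a placeholder for an arbitrary LLM completion call.

def llm_generate(prompt: str) -> str:
    raise NotImplementedError("plug in any LLM backend here")

def holistic(class_skeleton: str) -> str:
    # One shot: ask the model to complete the entire class at once.
    return llm_generate(f"Complete the whole class:\n{class_skeleton}")

def incremental(class_skeleton: str, method_signatures: list[str]) -> str:
    # Method by method: each call sees the code generated so far, so the
    # model never has to digest the whole class in one long instruction.
    code = class_skeleton
    for sig in method_signatures:
        code += "\n" + llm_generate(
            f"Given the class so far:\n{code}\nImplement only the method {sig}."
        )
    return code

def compositional(class_skeleton: str, method_signatures: list[str]) -> str:
    # Each method is generated independently from the skeleton alone,
    # then the pieces are assembled afterwards.
    bodies = [
        llm_generate(f"Given the skeleton:\n{class_skeleton}\n"
                     f"Implement only the method {sig}.")
        for sig in method_signatures
    ]
    return class_skeleton + "\n" + "\n".join(bodies)
```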

Evaluating Instruction-Tuned Large Language Models on Code Comprehension and Generation

no code implementations · 2 Aug 2023 · Zhiqiang Yuan, Junwei Liu, Qiancheng Zi, Mingwei Liu, Xin Peng, Yiling Lou

First, in the zero-shot setting, instruction-tuned LLMs are very competitive on code comprehension and generation tasks, and are sometimes even better than small SOTA models fine-tuned specifically for each downstream task.
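
For illustration, a zero-shot evaluation along these lines might look like the hypothetical sketch below: the model receives only a natural-language instruction and the input, with no fine-tuning and no in-context examples. The chat helper and model name are placeholders, not part of the paper.

```python
# Hypothetical zero-shot prompt for a code comprehension task
# (code summarization); no fine-tuning, no in-context examples.

def chat(model: str, prompt: str) -> str:
    raise NotImplementedError("stand-in for any instruction-tuned LLM API")

def zero_shot_summarize(code: str,
                        model: str = "any-instruction-tuned-llm") -> str:
    prompt = ("Summarize what the following function does "
              "in one sentence.\n" + code)
    return chat(model, prompt)
```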

KNOD: Domain Knowledge Distilled Tree Decoder for Automated Program Repair

1 code implementation · 3 Feb 2023 · Nan Jiang, Thibaud Lutellier, Yiling Lou, Lin Tan, Dan Goldwasser, Xiangyu Zhang

KNOD has two major novelties: (1) a novel three-stage tree decoder, which directly generates Abstract Syntax Trees of patched code following their inherent tree structure, and (2) a novel domain-rule distillation, which leverages syntactic and semantic rules and teacher-student distributions to explicitly inject domain knowledge into the decoding procedure during both the training and inference phases.

Program Repair
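
The rule-distillation idea can be sketched roughly as follows: a mask derived from the syntactic and semantic rules restricts the decoder's distribution to legal AST productions, and the teacher distribution supervises the student over that restricted support. This is a hypothetical PyTorch sketch with illustrative tensor names, not KNOD's actual implementation.

```python
# Hypothetical sketch of domain-rule distillation (not KNOD's actual code).
# rule_mask marks which AST productions the syntactic/semantic domain rules
# allow at the current decoding step.
import torch
import torch.nn.functional as F

def rule_distillation_loss(student_logits: torch.Tensor,
                           teacher_logits: torch.Tensor,
                           rule_mask: torch.Tensor) -> torch.Tensor:
    """student_logits, teacher_logits: (batch, num_productions)
    rule_mask: (batch, num_productions), 1 for rule-legal productions."""
    neg_inf = torch.finfo(student_logits.dtype).min
    # Remove probability mass from productions the domain rules forbid.
    masked_student = student_logits.masked_fill(rule_mask == 0, neg_inf)
    masked_teacher = teacher_logits.masked_fill(rule_mask == 0, neg_inf)
    # KL(teacher || student) over the rule-legal productions only.
    # At inference, the same mask would restrict decoding to legal productions.
    return F.kl_div(F.log_softmax(masked_student, dim=-1),
                    F.softmax(masked_teacher, dim=-1),
                    reduction="batchmean")
```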

An Empirical Study on Deployment Faults of Deep Learning Based Mobile Applications

1 code implementation · 13 Jan 2021 · Zhenpeng Chen, Huihan Yao, Yiling Lou, Yanbin Cao, Yuanqiang Liu, Haoyu Wang, Xuanzhe Liu

In contrast, faults related to the deployment of DL models on mobile devices (referred to as deployment faults of mobile DL apps) have not been well studied.
