2 code implementations • 25 Mar 2024 • Daoguang Zan, Ailun Yu, Wei Liu, Dong Chen, Bo Shen, Wei Li, Yafen Yao, Yongshun Gong, Xiaolin Chen, Bei Guan, Zhiguang Yang, Yongji Wang, Qianxiang Wang, Lizhen Cui
For feedback-based evaluation, we develop a VSCode plugin for CodeS and engage 30 participants in empirical studies.
1 code implementation • 25 Jan 2024 • Wei Li, Daoguang Zan, Bei Guan, Ailun Yu, Xiaolin Chen, Yongji Wang
Code large language models (Code LLMs) have demonstrated remarkable performance on code generation.
1 code implementation • 31 Aug 2023 • Daoguang Zan, Ailun Yu, Bo Shen, Jiaxin Zhang, Taihong Chen, Bing Geng, Bei Chen, Jichuan Ji, Yafen Yao, Yongji Wang, Qianxiang Wang
Results demonstrate that programming languages can significantly improve each other.
no code implementations • 27 Jul 2023 • Bo Shen, Jiaxin Zhang, Taihong Chen, Daoguang Zan, Bing Geng, An Fu, Muhan Zeng, Ailun Yu, Jichuan Ji, Jingyang Zhao, Yuenan Guo, Qianxiang Wang
In this paper, we propose a novel RRTF (Rank Responses to align Test&Teacher Feedback) framework, which can effectively and efficiently boost pre-trained large language models for code generation.
Ranked #33 on Code Generation on HumanEval
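The RRTF entry above describes ranking model responses against combined test and teacher feedback. As a minimal illustrative sketch (not the paper's actual implementation), one way to realize such a ranking is to score each candidate response by a weighted mix of an automated test signal and a teacher preference signal, then sort candidates best-first; the function name, score sources, and weights below are all assumptions for illustration:

```python
# Hypothetical sketch in the spirit of RRTF: rank candidate code responses by
# combining test feedback (e.g. fraction of unit tests passed) with teacher
# feedback (e.g. a preference score). The weights w_test/w_teacher are
# illustrative assumptions, not values from the paper.

def rank_responses(responses, test_scores, teacher_scores,
                   w_test=0.7, w_teacher=0.3):
    """Return responses sorted best-first by a combined feedback score."""
    combined = [
        (w_test * t + w_teacher * h, resp)
        for resp, t, h in zip(responses, test_scores, teacher_scores)
    ]
    # Higher combined score first; the top-ranked response would receive
    # the strongest reinforcement signal during training.
    combined.sort(key=lambda pair: pair[0], reverse=True)
    return [resp for _, resp in combined]

ranked = rank_responses(
    ["resp_a", "resp_b", "resp_c"],
    test_scores=[0.2, 0.9, 0.5],     # assumed test-pass fractions
    teacher_scores=[0.8, 0.6, 0.4],  # assumed teacher preference scores
)
# resp_b scores highest (0.7*0.9 + 0.3*0.6 = 0.81), so it ranks first.
```

In a full pipeline, the resulting order would drive a ranking-style training objective so that the model learns to prefer higher-ranked responses; this sketch only covers the scoring-and-sorting step.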