Search Results for author: YunFei Zhao

Found 8 papers, 5 papers with code

EvoCodeBench: An Evolving Code Generation Benchmark with Domain-Specific Evaluations

no code implementations • 30 Oct 2024 • Jia Li, Ge Li, Xuanming Zhang, YunFei Zhao, Yihong Dong, Zhi Jin, Binhua Li, Fei Huang, Yongbin Li

These evaluations help practitioners select superior LLMs in specific domains and discover the shortcomings of existing LLMs.

Code Generation • Fairness

DevEval: Evaluating Code Generation in Practical Software Projects

no code implementations • 12 Jan 2024 • Jia Li, Ge Li, YunFei Zhao, Yongmin Li, Zhi Jin, Hao Zhu, Huanyu Liu, Kaibo Liu, Lecheng Wang, Zheng Fang, Lanshen Wang, Jiazheng Ding, Xuanming Zhang, Yihong Dong, Yuqi Zhu, Bin Gu, Mengfei Yang

Compared to previous benchmarks, DevEval aligns with practical projects along multiple dimensions, e.g., real program distributions, sufficient dependencies, and sufficiently large project contexts.

Code Generation

Hot or Cold? Adaptive Temperature Sampling for Code Generation with Large Language Models

1 code implementation • 6 Sep 2023 • Yuqi Zhu, Ge Li, YunFei Zhao, Jia Li, Zhi Jin, Hong Mei

With an analysis of loss distributions of code tokens, we find that code tokens can be divided into two categories: challenging tokens that are difficult to predict and confident tokens that can be easily inferred.
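This two-category finding motivates sampling with a per-token temperature: a higher temperature for hard-to-predict tokens and a lower one for easily inferred tokens. Below is a minimal PyTorch sketch of that idea; the entropy-based confidence signal, the threshold, and the two temperature values are illustrative assumptions, not the paper's exact adaptive-temperature formulation.

```python
import torch

def adaptive_temperature_sample(logits: torch.Tensor,
                                t_confident: float = 0.2,
                                t_challenging: float = 1.0,
                                entropy_threshold: float = 2.0) -> torch.Tensor:
    # Gauge the model's uncertainty at this step via the entropy of its
    # next-token distribution (an assumed proxy; the paper's criterion may differ).
    probs = torch.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-9))).sum()

    # Challenging token (high entropy): sample with a higher temperature to
    # explore alternatives; confident token: sharpen toward the top choice.
    temperature = t_challenging if entropy.item() > entropy_threshold else t_confident

    scaled_probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(scaled_probs, num_samples=1)

# Hypothetical usage, with random logits standing in for a real LM decoding step.
logits = torch.randn(32000)
next_token_id = adaptive_temperature_sample(logits)
```

The design choice is that confident tokens (e.g., syntax like closing brackets) gain little from randomness, while challenging tokens benefit from a flatter sampling distribution.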

Code Generation
