no code implementations • 26 Feb 2025 • Jiarong Wu, Songqiang Chen, Jialun Cao, Hau Ching Lo, Shing-Chi Cheung
Existing code generation benchmarks for Large Language Models (LLMs), such as HumanEval and MBPP, are designed to study LLMs' end-to-end performance: each benchmark feeds a natural-language problem description to the model as input and examines the code it generates in a specific programming language.
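A minimal sketch of that end-to-end evaluation loop, in the HumanEval/MBPP style. The function names here (`query_llm`, `passes_tests`, `pass_at_1`) are hypothetical stand-ins, and real harnesses execute candidates in a sandbox rather than a bare `exec()`:

```python
def query_llm(prompt: str) -> str:
    """Hypothetical model call: returns Python source solving the prompt."""
    raise NotImplementedError

def passes_tests(candidate_src: str, test_src: str) -> bool:
    """Run the generated code, then its assert-based tests, in one namespace."""
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)  # define the candidate function(s)
        exec(test_src, namespace)       # raises AssertionError on failure
        return True
    except Exception:
        return False

def pass_at_1(problems: list[dict]) -> float:
    """problems: [{'prompt': natural-language spec, 'tests': assert code}, ...]"""
    solved = sum(
        passes_tests(query_llm(p["prompt"]), p["tests"]) for p in problems
    )
    return solved / len(problems)
```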
1 code implementation • 16 Nov 2024 • Jialun Cao, Songqiang Chen, Wuqi Zhang, Hau Ching Lo, Shing-Chi Cheung
However, the lack of automated code refactoring tools and scientifically validated refactoring techniques has hampered widespread industrial adoption.
no code implementations • 31 Mar 2021 • Xu Chen, Bangguo Yin, Songqiang Chen, Haifeng Li, Tian Xu
The series strategy avoids RS-m inconsistency, since its inputs are high-resolution, large-scale RSIs, and it reduces the distribution gap in multi-scale map generation because the generated maps at consecutive scales share similar pixel distributions.
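Read literally, the series strategy chains generators: the finest-scale map is produced from the high-resolution RSI, and each coarser map is produced from the map one scale above it rather than from a coarse RSI. The sketch below illustrates that chaining under this assumption; `generate_map` and its average-pooling body are placeholders, not the paper's actual model:

```python
import numpy as np

def generate_map(source: np.ndarray, scale: int) -> np.ndarray:
    """Hypothetical per-scale generator; in the paper's setting this would be
    a trained image-to-image model selected by `scale`. Placeholder: 2x2
    average-pool downsampling so the sketch runs as-is."""
    h, w, c = source.shape
    return source.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def series_multi_scale(rsi: np.ndarray, num_scales: int) -> list[np.ndarray]:
    """Series strategy: only the finest-scale map is generated from the RSI;
    every coarser map is generated from the previous *map*, whose pixel
    distribution is close to the target scale's."""
    maps = [generate_map(rsi, scale=0)]
    for s in range(1, num_scales):
        maps.append(generate_map(maps[-1], scale=s))
    return maps

# e.g. a 512x512 RGB remote sensing image -> maps at 256, 128, and 64 px
tiles = series_multi_scale(np.random.rand(512, 512, 3), num_scales=3)
```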