Search Results for author: Hau Ching Lo

Found 2 papers, 1 paper with code

Isolating Language-Coding from Problem-Solving: Benchmarking LLMs with PseudoEval

no code implementations • 26 Feb 2025 • Jiarong Wu, Songqiang Chen, Jialun Cao, Hau Ching Lo, Shing-Chi Cheung

Existing code generation benchmarks for Large Language Models (LLMs) such as HumanEval and MBPP are designed to study LLMs' end-to-end performance, where the benchmarks feed a problem description in natural language as input and examine the generated code in specific programming languages.

Tasks: Benchmarking · Code Generation · +2
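The abstract above describes the end-to-end protocol that PseudoEval contrasts against: a natural-language problem description goes in, generated code comes out, and correctness is judged by unit tests. The following is a minimal sketch of such a harness, assuming a HumanEval/MBPP-style task format; `query_llm` is a hypothetical stand-in for a real model call, not part of either benchmark.

```python
import subprocess
import sys
import tempfile

def query_llm(prompt: str) -> str:
    """Hypothetical model call -- replace with a real LLM API client."""
    raise NotImplementedError

def evaluate_task(description: str, test_code: str) -> bool:
    """End-to-end check: NL description in, generated code out, unit tests decide."""
    solution = query_llm(f"Write a Python function that solves:\n{description}")
    # Write the generated solution together with its unit tests to a temp file.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution + "\n\n" + test_code)
        path = f.name
    # The task counts as solved only if every assertion in test_code passes.
    result = subprocess.run([sys.executable, path], capture_output=True, timeout=30)
    return result.returncode == 0
```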

CODECLEANER: Elevating Standards with A Robust Data Contamination Mitigation Toolkit

1 code implementation • 16 Nov 2024 • Jialun Cao, Songqiang Chen, Wuqi Zhang, Hau Ching Lo, Shing-Chi Cheung

However, the lack of automated code refactoring tools and scientifically validated refactoring techniques has hampered widespread industrial implementation.
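The abstract points at the core need: automated, semantics-preserving refactorings that break verbatim overlap between benchmark code and a model's training data. Below is a minimal illustrative sketch of one such operator (identifier renaming) using Python's ast module; it is not CODECLEANER's actual implementation and deliberately ignores scoping and name-collision issues.

```python
import ast

class RenameLocals(ast.NodeTransformer):
    """Rewrite variable and parameter identifiers according to a mapping."""
    def __init__(self, mapping: dict[str, str]):
        self.mapping = mapping

    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

    def visit_arg(self, node: ast.arg) -> ast.arg:
        if node.arg in self.mapping:
            node.arg = self.mapping[node.arg]
        return node

def rename_identifiers(source: str, mapping: dict[str, str]) -> str:
    """Apply the renaming map to a snippet and return the refactored source."""
    tree = RenameLocals(mapping).visit(ast.parse(source))
    return ast.unparse(tree)  # requires Python 3.9+

# Example: the refactored code is functionally identical but no longer a
# verbatim match for the original snippet.
print(rename_identifiers(
    "def add(a, b):\n    total = a + b\n    return total",
    {"a": "x", "b": "y", "total": "acc"},
))
```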
