Search Results for author: Yingwei Ma

Found 11 papers, 7 papers with code

LLMs as Continuous Learners: Improving the Reproduction of Defective Code in Software Issues

no code implementations21 Nov 2024 Yalan Lin, Yingwei Ma, Rongyu Cao, Binhua Li, Fei Huang, Xiaodong Gu, Yongbin Li

Reproducing buggy code is the first and critically important step in issue resolution, as it helps identify the underlying problem and validate that generated patches actually resolve it.

UTMath: Math Evaluation with Unit Test via Reasoning-to-Coding Thoughts

1 code implementation11 Nov 2024 Bo Yang, Qingping Yang, Yingwei Ma, Runtao Liu

The evaluation of mathematical reasoning capabilities is essential for advancing Artificial General Intelligence (AGI).

Code Generation GSM8K +2
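The title describes evaluating mathematical reasoning by having the model produce code that is checked with unit tests rather than string matching. A minimal sketch of such a harness, with a hypothetical `evaluate_solution` helper and a `solve` entry-point convention assumed for illustration (a real harness would sandbox execution):

```python
def evaluate_solution(solution_code, test_cases):
    """Run a candidate solution (source string) against unit tests, in the
    spirit of unit-test-based evaluation: the model writes code, and
    correctness is checked programmatically rather than by string match."""
    ns = {}
    exec(solution_code, ns)  # hypothetical harness; real one would sandbox this
    fn = ns["solve"]
    return all(fn(*args) == expected for args, expected in test_cases)

# Toy candidate: closed form for the sum 1 + 2 + ... + n.
candidate = "def solve(n):\n    return n * (n + 1) // 2\n"
tests = [((1,), 1), ((4,), 10), ((10,), 55)]
print(evaluate_solution(candidate, tests))  # True
```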

Lingma SWE-GPT: An Open Development-Process-Centric Language Model for Automated Software Improvement

1 code implementation1 Nov 2024 Yingwei Ma, Rongyu Cao, Yongchang Cao, Yue Zhang, Jue Chen, Yibo Liu, Yuchen Liu, Binhua Li, Fei Huang, Yongbin Li

The results demonstrate that Lingma SWE-GPT 72B successfully resolves 30.20% of the GitHub issues, marking a significant improvement in automatic issue resolution (a 22.76% relative improvement over Llama 3.1 405B) and approaching the performance of closed-source models (GPT-4o resolves 31.80% of issues).

Language Modeling

Codev-Bench: How Do LLMs Understand Developer-Centric Code Completion?

1 code implementation2 Oct 2024 Zhenyu Pan, Rongyu Cao, Yongchang Cao, Yingwei Ma, Binhua Li, Fei Huang, Han Liu, Yongbin Li

Code completion, a key downstream task in code generation, is one of the most frequent and impactful methods for enhancing developer productivity in software development.

Code Completion Code Generation

In-Context Transfer Learning: Demonstration Synthesis by Transferring Similar Tasks

1 code implementation2 Oct 2024 Dingzirui Wang, Xuanliang Zhang, Qiguang Chen, Longxu Dou, Xiao Xu, Rongyu Cao, Yingwei Ma, Qingfu Zhu, Wanxiang Che, Binhua Li, Fei Huang, Yongbin Li

To address this, inspired by transfer learning, we propose In-Context Transfer Learning (ICTL), which synthesizes target task demonstrations by transferring labeled demonstrations from similar source tasks.

In-Context Learning Transfer Learning
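The transfer step described above can be sketched minimally: embed task descriptions, pick the most similar source task, and reuse its labeled demonstrations. The function names and toy 2-D "embeddings" below are illustrative assumptions; the actual ICTL method also synthesizes new target-task demonstrations rather than only retrieving source ones.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def transfer_demonstrations(target_emb, source_tasks):
    """Pick the source task most similar to the target task and
    reuse its labeled demonstrations as a starting point.

    source_tasks: list of (task_embedding, demonstrations) pairs.
    """
    best = max(source_tasks, key=lambda t: cosine(target_emb, t[0]))
    return best[1]

# Toy example: 2-D vectors stand in for real task encodings.
target = np.array([1.0, 0.1])
sources = [
    (np.array([0.9, 0.2]), ["demo: translate EN->FR"]),
    (np.array([0.0, 1.0]), ["demo: classify sentiment"]),
]
print(transfer_demonstrations(target, sources))  # picks the nearest task's demos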

Scaling Offline Model-Based RL via Jointly-Optimized World-Action Model Pretraining

1 code implementation1 Oct 2024 Jie Cheng, Ruixi Qiao, Gang Xiong, Qinghai Miao, Yingwei Ma, Binhua Li, Yongbin Li, Yisheng Lv

Experimental results indicate that our largest agent, with 150 million parameters, achieves 78.9% human-level performance on pretrained games using only 10% subsampled offline data, outperforming existing state-of-the-art large-scale offline RL baselines by 31.6% on average.

Atari Games model +3

How to Understand Whole Software Repository?

no code implementations3 Jun 2024 Yingwei Ma, Qingping Yang, Rongyu Cao, Binhua Li, Fei Huang, Yongbin Li

Specifically, we first condense the critical information of the whole repository into a repository knowledge graph in a top-down manner to reduce the complexity of the repository.

Language Modelling Large Language Model
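The top-down condensation described above can be sketched as a small hierarchy: repository, then file, then top-level definitions. This is a minimal sketch assuming Python sources and using the standard `ast` module; the paper's actual knowledge graph is richer than the `build_repo_graph` toy below.

```python
import ast

def build_repo_graph(files):
    """Condense a repository into a simple hierarchical knowledge graph:
    file -> names of its top-level functions and classes.

    files: dict mapping file path to source code string.
    """
    graph = {}
    for path, src in files.items():
        tree = ast.parse(src)
        graph[path] = [
            node.name for node in tree.body
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
        ]
    return graph

# Toy two-file "repository".
repo = {
    "utils.py": "def add(a, b):\n    return a + b\n",
    "models.py": "class Net:\n    pass\n",
}
print(build_repo_graph(repo))  # {'utils.py': ['add'], 'models.py': ['Net']}
```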

End-to-end Learnable Clustering for Intent Learning in Recommendation

2 code implementations11 Jan 2024 Yue Liu, Shihao Zhu, Jun Xia, Yingwei Ma, Jian Ma, Xinwang Liu, Shengju Yu, Kejun Zhang, Wenliang Zhong

Concretely, we encode user behavior sequences and initialize the cluster centers (latent intents) as learnable neurons.

Clustering Contrastive Learning +2
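The idea of cluster centers as learnable neurons can be sketched with soft assignments and gradient-like center updates. The functions and the `temp`/`lr` parameters below are illustrative assumptions, not the paper's actual end-to-end objective:

```python
import numpy as np

def soft_assign(x, centers, temp=0.1):
    """Soft assignment of a behavior embedding x to cluster centers
    (latent intents), via a softmax over negative distances."""
    d = np.linalg.norm(centers - x, axis=1)
    logits = -d / temp
    e = np.exp(logits - logits.max())  # numerically stable softmax
    return e / e.sum()

def update_centers(x, centers, lr=0.5, temp=0.1):
    """One gradient-like step: move each center toward x in proportion
    to its assignment weight, mimicking learnable center neurons."""
    w = soft_assign(x, centers, temp)
    return centers + lr * w[:, None] * (x - centers)

rng = np.random.default_rng(0)
centers = rng.normal(size=(3, 2))  # 3 latent intents in 2-D
x = np.array([5.0, 5.0])           # one user-behavior embedding
for _ in range(20):
    centers = update_centers(x, centers)
# After updates, the most-assigned center has drifted toward x.
```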

Bridging Code Semantic and LLMs: Semantic Chain-of-Thought Prompting for Code Generation

no code implementations16 Oct 2023 Yingwei Ma, Yue Yu, Shanshan Li, Yu Jiang, Yong Guo, Yuanliang Zhang, Yutao Xie, Xiangke Liao

While traditional techniques leveraging such semantic information require complex static or dynamic code analysis to obtain features such as data flow and control flow, SeCoT demonstrates that this process can be fully automated via the intrinsic capabilities of LLMs (i.e., in-context learning), while remaining generalizable and applicable to challenging domains.

Code Generation HumanEval +1

At Which Training Stage Does Code Data Help LLMs Reasoning?

1 code implementation28 Sep 2023 Yingwei Ma, Yue Liu, Yue Yu, Yuanliang Zhang, Yu Jiang, Changjian Wang, Shanshan Li

Inspired by the great success of code data in training LLMs, we naturally ask at which training stage introducing code data can truly help LLMs reason.

Question Answering

Language-Enhanced Session-Based Recommendation with Decoupled Contrastive Learning

no code implementations20 Jul 2023 Zhipeng Zhang, Piao Tong, Yingwei Ma, Qiao Liu, Xujiang Liu, Xu Luo

Furthermore, we introduce a novel Decoupled Contrastive Learning method to enhance the effectiveness of the language representation.

Contrastive Learning Retrieval +1
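One common reading of "decoupled" contrastive learning (following Yeh et al.'s decoupled contrastive loss, used here as a stand-in since the paper's exact formulation is not shown) removes the positive pair from the InfoNCE denominator, decoupling the pull and push terms:

```python
import numpy as np

def decoupled_contrastive_loss(anchor, positive, negatives, temp=0.5):
    """Decoupled contrastive loss: unlike InfoNCE, the positive similarity
    is excluded from the denominator, so the loss is
    -sim(anchor, pos) + logsumexp over negatives only."""
    def sim(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) * temp)

    pos = sim(anchor, positive)
    neg = np.array([sim(anchor, n) for n in negatives])
    m = neg.max()  # stable logsumexp
    return -pos + (m + np.log(np.exp(neg - m).sum()))
```

A closer positive yields a lower loss than a distant one under the same negatives, which is the behavior a contrastive objective should have.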
