1 code implementation • 10 Oct 2024 • Bofei Gao, Feifan Song, Zhe Yang, Zefan Cai, Yibo Miao, Qingxiu Dong, Lei LI, Chenghao Ma, Liang Chen, Runxin Xu, Zhengyang Tang, Benyou Wang, Daoguang Zan, Shanghaoran Quan, Ge Zhang, Lei Sha, Yichang Zhang, Xuancheng Ren, Tianyu Liu, Baobao Chang
However, existing benchmarks like GSM8K or MATH are now being solved with high accuracy (e.g., OpenAI o1 achieves 94.8% on the MATH dataset), indicating that they are no longer adequate for truly challenging these models.
1 code implementation • 28 May 2024 • Zhengyang Tang, Chenyu Huang, Xin Zheng, Shixi Hu, Zizhuo Wang, Dongdong Ge, Benyou Wang
We apply the data from OR-Instruct to various open-source LLMs at the 7B scale (termed ORLMs), resulting in significantly improved capability for optimization modeling.
no code implementations • 5 Mar 2024 • Zhengyang Tang, Xingxing Zhang, Benyou Wang, Furu Wei
Inspired by the cognitive mechanisms of human mathematical learning, it first extracts topics and knowledge points from seed math questions and then builds a concept graph, which is subsequently used to generate new math questions.
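A minimal sketch of this concept-graph pipeline, assuming a simple co-occurrence graph and illustrative helper names (in the paper, the extraction and generation steps are performed by LLM calls rather than the toy logic below):

```python
import random
from collections import defaultdict

# Illustrative sketch (not the paper's actual API): topics and knowledge
# points extracted from seed questions become graph nodes; concepts that
# co-occur in the same seed question are connected by an edge. New
# questions are generated from sampled, connected concept subsets.

def build_concept_graph(seed_concepts):
    """seed_concepts: list of concept sets, one per seed question."""
    graph = defaultdict(set)
    for concepts in seed_concepts:
        for a in concepts:
            for b in concepts:
                if a != b:
                    graph[a].add(b)
    return graph

def sample_concept_combo(graph, start, walk_len=3):
    """Random walk over the graph to pick a set of related concepts."""
    combo, node = {start}, start
    for _ in range(walk_len):
        neighbors = list(graph[node])
        if not neighbors:
            break
        node = random.choice(neighbors)
        combo.add(node)
    return combo

# Each sampled combo would then be handed to an LLM prompt such as
# "Write a math question that requires {combo}" to synthesize new data.
seeds = [{"fractions", "ratios"}, {"ratios", "percentages"}]
graph = build_concept_graph(seeds)
print(sample_concept_combo(graph, "ratios"))
```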
no code implementations • 20 Feb 2024 • Haoran Li, Qingxiu Dong, Zhengyang Tang, Chaojun Wang, Xingxing Zhang, Haoyang Huang, Shaohan Huang, Xiaolong Huang, Zeqiang Huang, Dongdong Zhang, Yuxian Gu, Xin Cheng, Xun Wang, Si-Qing Chen, Li Dong, Wei Lu, Zhifang Sui, Benyou Wang, Wai Lam, Furu Wei
We introduce Generalized Instruction Tuning (called GLAN), a general and scalable method for instruction tuning of Large Language Models (LLMs).
1 code implementation • 23 Mar 2023 • Juhao Liang, Chen Zhang, Zhengyang Tang, Jie Fu, Dawei Song, Benyou Wang
Built upon this paradigm, we propose REMOP, a retrieval model with modular prompt tuning.
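A hedged PyTorch sketch of the modular-prompt-tuning idea, with hypothetical module names and shapes (REMOP's actual architecture is not detailed in this summary): the backbone encoder is frozen, and small trainable prompt modules are prepended to the token embeddings, so modules can be composed per retrieval task or domain.

```python
import torch
import torch.nn as nn

class ModularPromptEncoder(nn.Module):
    """Illustrative only: a frozen encoder prefixed with composable,
    learnable prompt modules (e.g., one per task and one per domain)."""

    def __init__(self, encoder, hidden_dim, prompt_len=8, modules=("task", "domain")):
        super().__init__()
        self.encoder = encoder  # frozen pretrained encoder
        for p in self.encoder.parameters():
            p.requires_grad = False
        # one learnable prompt per module, each of shape (prompt_len, hidden_dim)
        self.prompts = nn.ParameterDict({
            m: nn.Parameter(torch.randn(prompt_len, hidden_dim) * 0.02)
            for m in modules
        })

    def forward(self, token_embeds, active_modules):
        # token_embeds: (batch, seq_len, hidden_dim) input embeddings
        batch = token_embeds.size(0)
        prompt = torch.cat([self.prompts[m] for m in active_modules], dim=0)
        prompt = prompt.unsqueeze(0).expand(batch, -1, -1)
        # prepend the selected prompt modules, then run the frozen encoder
        return self.encoder(torch.cat([prompt, token_embeds], dim=1))

# toy usage with a small frozen transformer as the backbone
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
backbone = nn.TransformerEncoder(layer, num_layers=2)
model = ModularPromptEncoder(backbone, hidden_dim=64)
x = torch.randn(2, 16, 64)
out = model(x, active_modules=["task", "domain"])
print(out.shape)  # (2, 16 + 2*8, 64)
```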
1 code implementation • COLING 2022 • Zhengyang Tang, Benyou Wang, Ting Yao
We believe this work benefits industry, as it saves enormous deployment effort and cost while increasing the utility of computing resources.