no code implementations • 29 Feb 2024 • Zexiong Ma, Shengnan An, Bing Xie, Zeqi Lin
However, performance remains unsatisfactory when generating library-oriented code, especially for libraries absent from the training data of LLMs.
1 code implementation • 31 Oct 2023 • Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou, Weizhu Chen
To further improve their reasoning capabilities, this work explores whether LLMs can LEarn from MistAkes (LEMA), akin to the human learning process.
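The abstract suggests fine-tuning on mistake-correction data. As a purely illustrative sketch (the field names and prompt wording are assumptions, not the paper's exact data format), one such correction example might be assembled like this:

```python
def make_correction_example(question: str, wrong_solution: str,
                            correct_solution: str) -> dict:
    # Pair a model's erroneous reasoning with its correction, yielding one
    # fine-tuning example in the learn-from-mistakes spirit.
    source = (f"Question: {question}\n"
              f"Incorrect solution: {wrong_solution}\n"
              "Identify the error, then solve the question correctly.")
    target = f"Corrected solution: {correct_solution}"
    return {"input": source, "target": target}

ex = make_correction_example(
    "What is 3 * (2 + 4)?",
    "3 * 2 + 4 = 10",           # the mistake: parentheses ignored
    "3 * (2 + 4) = 3 * 6 = 18",
)
```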
no code implementations • 23 May 2023 • Shengnan An, Bo Zhou, Zeqi Lin, Qiang Fu, Bei Chen, Nanning Zheng, Weizhu Chen, Jian-Guang Lou
Few-shot selection -- selecting appropriate examples for each test instance separately -- is important for in-context learning.
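One common realization of per-instance selection (a generic sketch, not necessarily this paper's method) retrieves the pool examples most similar to each test input, here with a simple bag-of-words cosine similarity:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_few_shot(pool: list, query: str, k: int = 2) -> list:
    # Pick the k pool examples most similar to this particular test instance.
    q = Counter(query.lower().split())
    ranked = sorted(pool,
                    key=lambda ex: cosine(Counter(ex.lower().split()), q),
                    reverse=True)
    return ranked[:k]

pool = [
    "translate english to french: hello",
    "add two numbers: 3 + 4",
    "translate english to french: goodbye",
]
sel = select_few_shot(pool, "translate english to french: thanks", k=2)
```

With this query, both translation examples outrank the arithmetic one, so the in-context prompt would be built from task-relevant demonstrations.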
no code implementations • 8 May 2023 • Shengnan An, Zeqi Lin, Qiang Fu, Bei Chen, Nanning Zheng, Jian-Guang Lou, Dongmei Zhang
Compositional generalization (understanding unseen combinations of seen primitives) is an essential reasoning capability in human intelligence.
1 code implementation • 23 Feb 2023 • Shengnan An, Zeqi Lin, Bei Chen, Qiang Fu, Nanning Zheng, Jian-Guang Lou
Abstraction is a desirable capability for deep learning models, i.e., the ability to induce abstract concepts from concrete instances and to apply them flexibly beyond the learning context.
1 code implementation • 31 Oct 2022 • Daoguang Zan, Bei Chen, Zeqi Lin, Bei Guan, Yongji Wang, Jian-Guang Lou
In this paper, we investigate how to equip pre-trained language models with the ability of code generation for private libraries.
1 code implementation • 21 Jul 2022 • Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, Weizhu Chen
A natural way to evaluate the quality and correctness of a code solution is to run it against a set of test cases, but the manual creation of such test cases is often costly and time-consuming.
Ranked #1 on Code Generation on APPS (Introductory Pass@1 metric)
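This entry (CodeT) sidesteps manual test writing by having the model generate test cases itself and then ranking sampled solutions by execution. A stripped-down sketch of the execution-based ranking step, with hand-written stand-ins for what would be model-generated candidates and tests:

```python
def rank_by_tests(candidates, tests):
    # Score each candidate solution by the number of test cases it passes,
    # best first. Crashing on a test counts as a failure.
    def passes(fn, test):
        try:
            return bool(test(fn))
        except Exception:
            return False
    scored = [(sum(passes(fn, t) for t in tests), fn) for fn in candidates]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored

# Stand-ins for model-sampled solutions to "absolute value".
candidates = [
    lambda x: x if x >= 0 else -x,  # correct
    lambda x: -x,                   # fails on positive inputs
    lambda x: x,                    # fails on negative inputs
]
# Stand-ins for model-generated test cases.
tests = [
    lambda f: f(3) == 3,
    lambda f: f(-3) == 3,
    lambda f: f(0) == 0,
]
best_score, best_fn = rank_by_tests(candidates, tests)[0]
```

The correct candidate passes all three tests and is ranked first; the flawed ones each fail a case and drop down the list.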
1 code implementation • 14 Jun 2022 • Daoguang Zan, Bei Chen, Dejian Yang, Zeqi Lin, Minsu Kim, Bei Guan, Yongji Wang, Weizhu Chen, Jian-Guang Lou
Training a code generation model usually requires expensive text-code paired data.
Ranked #121 on Code Generation on HumanEval
no code implementations • 6 Jun 2022 • Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, Weizhu Chen
Few-shot learning is a challenging task that requires language models to generalize from limited examples.
Ranked #49 on Arithmetic Reasoning on GSM8K
no code implementations • 7 Mar 2022 • Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou
This motivates us to propose input-tuning, which fine-tunes both the continuous prompts and the input representations, leading to a more effective way to adapt unfamiliar inputs to frozen PLMs.
1 code implementation • 27 Jan 2022 • Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Qiang Fu, Yan Gao, Jian-Guang Lou, Weizhu Chen
Reasoning over natural language is a long-standing goal for the research community.
Ranked #2 on Question Answering on DROP Test (using extra training data)
1 code implementation • ICLR 2022 • Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou
TAPEX addresses the data scarcity challenge by guiding the language model to mimic a SQL executor on a diverse, large-scale, high-quality synthetic corpus.
Ranked #1 on Semantic Parsing on WikiSQL (Denotation accuracy (test) metric)
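The synthetic corpus pairs a SQL query plus a serialized table with the query's execution result. A minimal sketch of generating one such executor-mimicry example (the table schema and serialization format here are illustrative assumptions, not TAPEX's exact format), using sqlite3 as the executor:

```python
import sqlite3

def make_executor_example(rows, sql):
    # Execute a SQL query over a tiny in-memory table, then pair the
    # (query + flattened table) input with the execution result as target.
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE t (city TEXT, pop INTEGER)")
    con.executemany("INSERT INTO t VALUES (?, ?)", rows)
    result = [str(v) for (v,) in con.execute(sql).fetchall()]
    con.close()
    flat = " row: ".join(f"{city} | {pop}" for city, pop in rows)
    source = f"{sql} table: row: {flat}"
    target = ", ".join(result)
    return source, target

rows = [("Paris", 11), ("Lyon", 2)]
src, tgt = make_executor_example(rows, "SELECT city FROM t WHERE pop > 10")
```

Sampling many (table, query) pairs this way yields pre-training data without any human annotation, since the executor provides the labels.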
2 code implementations • Findings (ACL) 2021 • Chenyao Liu, Shengnan An, Zeqi Lin, Qian Liu, Bei Chen, Jian-Guang Lou, Lijie Wen, Nanning Zheng, Dongmei Zhang
In this paper, we propose LeAR, an end-to-end neural model to learn algebraic recombination for compositional generalization.
Ranked #2 on Semantic Parsing on CFQ
no code implementations • 13 Dec 2020 • Yinuo Guo, Zeqi Lin, Jian-Guang Lou, Dongmei Zhang
Experiments on Geo, ComplexWebQuestions, and Formulas show that our framework consistently improves the performance of neural semantic parsers across different domains.
no code implementations • 8 Dec 2020 • Yinuo Guo, Hualei Zhu, Zeqi Lin, Bei Chen, Jian-Guang Lou, Dongmei Zhang
Human intelligence exhibits compositional generalization (i.e., the capacity to understand and produce unseen combinations of seen components), but current neural seq2seq models lack this ability.
no code implementations • NeurIPS 2020 • Yinuo Guo, Zeqi Lin, Jian-Guang Lou, Dongmei Zhang
We formalize human language understanding as a structured prediction task where the output is a partially ordered set (poset).
Ranked #4 on Semantic Parsing on CFQ
1 code implementation • NeurIPS 2020 • Qian Liu, Shengnan An, Jian-Guang Lou, Bei Chen, Zeqi Lin, Yan Gao, Bin Zhou, Nanning Zheng, Dongmei Zhang
Compositional generalization is a basic and essential intellectual capability of human beings, which allows us to readily recombine known parts.