Search Results for author: Zeqi Lin

Found 17 papers, 9 papers with code

Compositional API Recommendation for Library-Oriented Code Generation

no code implementations • 29 Feb 2024 • Zexiong Ma, Shengnan An, Bing Xie, Zeqi Lin

However, performance remains unsatisfactory when generating library-oriented code, especially for libraries not present in the LLMs' training data.

Library-Oriented Code Generation
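
To make the decompose-then-recommend idea concrete, here is a minimal Python sketch: a task is split into sub-tasks and candidate APIs are retrieved for each. The `llm_decompose` stand-in and the toy word-overlap retriever are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of compositional API recommendation: decompose a
# coarse-grained task into sub-tasks, retrieve candidate APIs per
# sub-task, then merge the candidates.

from typing import List

def llm_decompose(task: str) -> List[str]:
    # Placeholder: in practice an LLM splits the task into sub-tasks.
    return [t.strip() for t in task.split(" and ")]

def retrieve_apis(subtask: str, api_docs: dict, top_k: int = 2) -> List[str]:
    # Toy lexical retriever: rank APIs by word overlap with the sub-task.
    words = set(subtask.lower().split())
    scored = sorted(
        api_docs.items(),
        key=lambda kv: -len(words & set(kv[1].lower().split())),
    )
    return [name for name, _ in scored[:top_k]]

def recommend(task: str, api_docs: dict) -> List[str]:
    apis: List[str] = []
    for sub in llm_decompose(task):
        apis.extend(a for a in retrieve_apis(sub, api_docs) if a not in apis)
    return apis

api_docs = {
    "pd.read_csv": "read a csv file into a dataframe",
    "df.groupby": "group dataframe rows by column values",
    "df.plot": "plot dataframe columns as a chart",
}
print(recommend("read a csv file and plot the columns", api_docs))
```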

Learning From Mistakes Makes LLM Better Reasoner

1 code implementation • 31 Oct 2023 • Shengnan An, Zexiong Ma, Zeqi Lin, Nanning Zheng, Jian-Guang Lou, Weizhu Chen

To further improve their reasoning capabilities, this work explores whether LLMs can LEarn from MistAkes (LEMA), akin to the human learning process.

GSM8K · Math +1
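
A minimal sketch of a LEMA-style data loop, under the assumption that mistakes are collected from a student model and repaired by a stronger corrector; `student_solve`, `corrector_fix`, and `fine_tune` are hypothetical stand-ins, not the paper's code.

```python
# Collect the model's wrong reasoning chains, have a stronger
# "corrector" produce fixes, and fine-tune on the corrected pairs.

def student_solve(question):
    # Stand-in for sampling a chain-of-thought answer from the student LLM.
    return {"rationale": "3 + 4 = 8", "answer": 8}

def corrector_fix(question, wrong):
    # Stand-in for a stronger model that explains and repairs the mistake.
    return {"rationale": "3 + 4 = 7; the earlier step added incorrectly",
            "answer": 7}

def fine_tune(model, pairs):
    # Stand-in: fine-tune on (question, wrong rationale, correction) triples.
    print(f"fine-tuning on {len(pairs)} mistake-correction pairs")

dataset = [{"question": "What is 3 + 4?", "gold": 7}]
correction_pairs = []
for ex in dataset:
    pred = student_solve(ex["question"])
    if pred["answer"] != ex["gold"]:          # keep only genuine mistakes
        fix = corrector_fix(ex["question"], pred)
        correction_pairs.append((ex["question"], pred["rationale"], fix["rationale"]))

fine_tune("student-llm", correction_pairs)
```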

Skill-Based Few-Shot Selection for In-Context Learning

no code implementations • 23 May 2023 • Shengnan An, Bo Zhou, Zeqi Lin, Qiang Fu, Bei Chen, Nanning Zheng, Weizhu Chen, Jian-Guang Lou

Few-shot selection -- selecting appropriate examples for each test instance separately -- is important for in-context learning.

In-Context Learning · Semantic Parsing +1
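
To illustrate the idea, a toy sketch of skill-based selection: each candidate example is described by the skill it exercises, and nearest neighbours are chosen in that skill space. The bag-of-words embedding and skill descriptions are illustrative assumptions; a real system would use a neural sentence embedder.

```python
# Select few-shot demonstrations by similarity of skill descriptions,
# not surface similarity of the raw inputs.

import math
from collections import Counter

def embed(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_examples(test_skill, pool, k=2):
    # pool: list of (skill_description, demonstration) pairs
    q = embed(test_skill)
    ranked = sorted(pool, key=lambda p: -cosine(q, embed(p[0])))
    return [demo for _, demo in ranked[:k]]

pool = [
    ("filter rows by a condition", "demo: SELECT * FROM t WHERE x > 3"),
    ("aggregate values per group", "demo: SELECT g, SUM(x) FROM t GROUP BY g"),
    ("join two tables on a key", "demo: SELECT * FROM a JOIN b ON a.k = b.k"),
]
print(select_examples("aggregate sales values per customer group", pool, k=1))
```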

How Do In-Context Examples Affect Compositional Generalization?

no code implementations • 8 May 2023 • Shengnan An, Zeqi Lin, Qiang Fu, Bei Chen, Nanning Zheng, Jian-Guang Lou, Dongmei Zhang

Compositional generalization--understanding unseen combinations of seen primitives--is an essential reasoning capability in human intelligence.

In-Context Learning
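
For concreteness, a toy SCAN-style split illustrating the definition: every primitive and modifier appears in training, but the test combination is never seen. SCAN is the standard benchmark style here; this particular split is an illustrative assumption.

```python
# A toy compositional-generalization split: "jump" and "twice" are
# both seen in training, but never together.

train = {
    "walk": "WALK",
    "jump": "JUMP",
    "walk twice": "WALK WALK",
    "look twice": "LOOK LOOK",
}
test = {
    "jump twice": "JUMP JUMP",  # unseen combination of seen parts
}
```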

Does Deep Learning Learn to Abstract? A Systematic Probing Framework

1 code implementation • 23 Feb 2023 • Shengnan An, Zeqi Lin, Bei Chen, Qiang Fu, Nanning Zheng, Jian-Guang Lou

Abstraction is a desirable capability for deep learning models: inducing abstract concepts from concrete instances and flexibly applying them beyond the learning context.

When Language Model Meets Private Library

1 code implementation • 31 Oct 2022 • Daoguang Zan, Bei Chen, Zeqi Lin, Bei Guan, Yongji Wang, Jian-Guang Lou

In this paper, we investigate how to equip pre-trained language models with the ability of code generation for private libraries.

Code Generation · Language Modelling +1
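
One natural realization of this setting is retrieve-then-generate: pull relevant private API docs into the prompt before generation. The sketch below is an assumption-laden illustration; the library name, its docs, and `call_llm` are all hypothetical.

```python
# Retrieve the most relevant private API docs and condition the code
# LLM on them, so it can use APIs absent from its pre-training data.

def retrieve_docs(query, docs, top_k=2):
    # Toy lexical retriever over the private library's documentation.
    words = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: -len(words & set(d.lower().split())))
    return ranked[:top_k]

def call_llm(prompt):
    # Hypothetical stand-in for any code-generation model.
    return "# (model-generated code would appear here)"

private_docs = [
    "monkey.load_table(path) -> Table: load a table from disk",
    "monkey.filter_rows(table, predicate) -> Table: keep matching rows",
    "monkey.save_table(table, path): write a table to disk",
]

query = "load a table and keep rows where price > 10"
context = "\n".join(retrieve_docs(query, private_docs))
prompt = f"API docs:\n{context}\n\nTask: {query}\nCode:\n"
print(call_llm(prompt))
```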

CodeT: Code Generation with Generated Tests

1 code implementation • 21 Jul 2022 • Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, Weizhu Chen

A natural way to evaluate the quality and correctness of a code solution is to run it against a set of test cases, but the manual creation of such test cases is often costly and time-consuming.

 Ranked #1 on Code Generation on APPS (Introductory Pass@1 metric)

Code Generation
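
A minimal sketch of the dual execution agreement idea behind CodeT: sampled solutions are executed against model-generated tests, solutions that pass the same tests form a consensus group, and groups are ranked by group size times tests passed. The toy solutions and tests below stand in for LLM samples.

```python
# Rank sampled code solutions by agreement with generated tests.

def passes(solution_src, test_src):
    env = {}
    try:
        exec(solution_src, env)   # define the candidate function
        exec(test_src, env)       # asserts raise on failure
        return True
    except Exception:
        return False

solutions = [
    "def add(a, b): return a + b",
    "def add(a, b): return a - b",   # a wrong sample
]
tests = ["assert add(1, 2) == 3", "assert add(0, 0) == 0"]

groups = {}
for sol in solutions:
    passed = frozenset(t for t in tests if passes(sol, t))
    groups.setdefault(passed, []).append(sol)

# Score each consensus group by (#solutions in group) x (#tests passed).
best = max(groups.items(), key=lambda kv: len(kv[1]) * len(kv[0]))
print(best[1][0])   # the correct add wins the consensus ranking
```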

Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models

no code implementations • 7 Mar 2022 • Shengnan An, Yifei Li, Zeqi Lin, Qian Liu, Bei Chen, Qiang Fu, Weizhu Chen, Nanning Zheng, Jian-Guang Lou

This motivates us to propose input-tuning, which fine-tunes both the continuous prompts and the input representations, leading to a more effective way to adapt unfamiliar inputs to frozen PLMs.

Language Modelling · Natural Language Understanding +1
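
A minimal PyTorch sketch of this setup, assuming a tiny frozen encoder stands in for the pretrained model: only the continuous prompt embeddings and a small input adapter receive gradients. The dimensions and adapter shape are illustrative choices, not the paper's configuration.

```python
# Input-tuning sketch: freeze the PLM, train continuous prompts plus
# an adapter over the input embeddings.

import torch
import torch.nn as nn

d_model, n_prompt, vocab = 64, 8, 1000

frozen_lm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=2,
)
embed = nn.Embedding(vocab, d_model)
for p in list(frozen_lm.parameters()) + list(embed.parameters()):
    p.requires_grad = False                     # freeze the PLM

prompt = nn.Parameter(torch.randn(n_prompt, d_model))            # continuous prompt
adapter = nn.Sequential(nn.Linear(d_model, d_model), nn.Tanh())  # input adapter

tokens = torch.randint(0, vocab, (2, 16))       # a toy batch
x = adapter(embed(tokens))                      # adapt unfamiliar inputs
x = torch.cat([prompt.expand(2, -1, -1), x], dim=1)
out = frozen_lm(x)

# Only the prompt and adapter would be optimized during training.
optim = torch.optim.AdamW([prompt] + list(adapter.parameters()), lr=1e-3)
print(out.shape)    # torch.Size([2, 24, 64])
```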

Reasoning Like Program Executors

1 code implementation • 27 Jan 2022 • Xinyu Pi, Qian Liu, Bei Chen, Morteza Ziyadi, Zeqi Lin, Qiang Fu, Yan Gao, Jian-Guang Lou, Weizhu Chen

Reasoning over natural language is a long-standing goal for the research community.

Ranked #2 on Question Answering on DROP Test (using extra training data)

Logical Reasoning · Math +1
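
One way to picture the pre-training signal is to synthesize tiny programs, execute them, and train the model to predict the output. The arithmetic-only generator below is a simplified assumption; the paper covers richer program contexts.

```python
# Build (program -> execution result) pairs so an LM can be trained
# to imitate a program executor.

import random

def make_example(rng):
    a, b = rng.randint(0, 99), rng.randint(0, 99)
    op = rng.choice(["+", "-", "*"])
    expr = f"{a} {op} {b}"
    return {"source": f"execute: {expr}", "target": str(eval(expr))}

rng = random.Random(0)
corpus = [make_example(rng) for _ in range(3)]
for ex in corpus:
    print(ex["source"], "->", ex["target"])
```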

TAPEX: Table Pre-training via Learning a Neural SQL Executor

1 code implementation • ICLR 2022 • Qian Liu, Bei Chen, Jiaqi Guo, Morteza Ziyadi, Zeqi Lin, Weizhu Chen, Jian-Guang Lou

TAPEX addresses the data scarcity challenge via guiding the language model to mimic a SQL executor on the diverse, large-scale and high-quality synthetic corpus.

 Ranked #1 on Semantic Parsing on WikiSQL (Denotation accuracy (test) metric)

Language Modelling · Semantic Parsing +1
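
A minimal sketch of how such pre-training pairs can be built, assuming sqlite3 as the SQL executor and a typical row-wise table flattening; the exact linearization and corpus construction in the paper may differ.

```python
# Execute synthetic SQL over a table, then pair the query plus the
# flattened table with the execution result as seq2seq training text.

import sqlite3

rows = [("apple", 3), ("banana", 5), ("cherry", 2)]
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fruit (name TEXT, qty INTEGER)")
conn.executemany("INSERT INTO fruit VALUES (?, ?)", rows)

sql = "SELECT name FROM fruit WHERE qty > 2"
answer = [r[0] for r in conn.execute(sql)]

# Flatten the table the way table pre-training methods typically do.
flat = "col : name | qty " + " ".join(
    f"row {i} : {n} | {q}" for i, (n, q) in enumerate(rows, 1)
)
source = f"{sql} {flat}"
target = ", ".join(answer)
print(source)
print("->", target)
```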

Iterative Utterance Segmentation for Neural Semantic Parsing

no code implementations • 13 Dec 2020 • Yinuo Guo, Zeqi Lin, Jian-Guang Lou, Dongmei Zhang

Experiments on Geo, ComplexWebQuestions, and Formulas show that our framework consistently improves the performance of neural semantic parsers across different domains.

Segmentation · Semantic Parsing
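
A minimal sketch of an iterative segment-then-parse loop consistent with the title: a segmenter peels off one span of the utterance at a time, a base parser handles each span, and the partial results are composed. The connective-based splitter and `base_parse` stand-in are illustrative assumptions, not the paper's learned segmenter.

```python
# Iteratively segment an utterance and parse each span.

CONNECTIVES = (" and then ", " and ")   # ordered: longest match first

def segment(utterance):
    # Peel off the first clause before a connective, if any.
    for c in CONNECTIVES:
        if c in utterance:
            head, rest = utterance.split(c, 1)
            return head, rest
    return utterance, None

def base_parse(span):
    return f"PARSE({span!r})"   # stand-in for a neural semantic parser

def parse(utterance):
    parts = []
    while utterance is not None:
        span, utterance = segment(utterance)
        parts.append(base_parse(span))
    return " ; ".join(parts)

print(parse("book a flight to boston and then reserve a hotel"))
```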

Revisiting Iterative Back-Translation from the Perspective of Compositional Generalization

no code implementations • 8 Dec 2020 • Yinuo Guo, Hualei Zhu, Zeqi Lin, Bei Chen, Jian-Guang Lou, Dongmei Zhang

Human intelligence exhibits compositional generalization (i.e., the capacity to understand and produce unseen combinations of seen components), but current neural seq2seq models lack such ability.

Translation
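
A minimal sketch of the iterative back-translation loop itself, with `train` and `translate` as hypothetical stand-ins: forward and backward models label monolingual data for each other, and each side is retrained on the resulting pseudo-parallel pairs every round.

```python
# Iterative back-translation: alternate between generating
# pseudo-parallel data and retraining each direction on it.

def train(pairs):
    # Stand-in: return a "model" that just remembers its training size.
    return {"n_pairs": len(pairs)}

def translate(model, sentences):
    # Stand-in for decoding with the current model.
    return [s.upper() for s in sentences]

parallel = [("walk twice", "WALK WALK")]
mono_src = ["jump twice"]
mono_tgt = ["JUMP JUMP"]

fwd, bwd = train(parallel), train([(t, s) for s, t in parallel])
for _ in range(2):   # a couple of back-translation rounds
    pseudo_from_src = list(zip(mono_src, translate(fwd, mono_src)))
    pseudo_from_tgt = list(zip(translate(bwd, mono_tgt), mono_tgt))
    fwd = train(parallel + pseudo_from_tgt)
    bwd = train([(t, s) for s, t in parallel + pseudo_from_src])

print(fwd["n_pairs"], bwd["n_pairs"])
```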

Compositional Generalization by Learning Analytical Expressions

1 code implementation • NeurIPS 2020 • Qian Liu, Shengnan An, Jian-Guang Lou, Bei Chen, Zeqi Lin, Yan Gao, Bin Zhou, Nanning Zheng, Dongmei Zhang

Compositional generalization is a basic and essential intellectual capability of human beings, allowing us to readily recombine known parts.

Hierarchical Reinforcement Learning
