1 code implementation • EMNLP 2020 • Jiaqi Guo, Qian Liu, Jian-Guang Lou, Zhenwen Li, Xueqing Liu, Tao Xie, Ting Liu
Thus, the impact of meaning representation on semantic parsing is less understood.
no code implementations • 11 Mar 2024 • Linyi Li, Shijie Geng, Zhenwen Li, Yibo He, Hao Yu, Ziyue Hua, Guanghan Ning, Siwei Wang, Tao Xie, Hongxia Yang
Large Language Models for understanding and generating code (code LLMs) have witnessed tremendous progress in recent years.
no code implementations • 4 Jan 2024 • Zhenwen Li, Tao Xie
We propose an automatic test case generation method that first generates a database and then uses LLMs to predict the ground truth, i.e., the expected execution results of the ground-truth SQL query on this database.
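A minimal sketch of the resulting test harness, assuming a toy schema and hand-supplied expected results (standing in for the LLM-predicted ground truth; all table names, rows, and queries here are hypothetical):

```python
import sqlite3

def build_test_database():
    # Hypothetical generated database: a single 'employees' table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, salary INTEGER)")
    conn.executemany(
        "INSERT INTO employees VALUES (?, ?, ?)",
        [(1, "Ada", 90000), (2, "Bob", 70000), (3, "Eve", 90000)],
    )
    return conn

def execution_matches(conn, candidate_sql, expected_rows):
    # Execute the candidate SQL and compare its result set (order-insensitive)
    # against the expected results; in the paper these expected results are
    # predicted by an LLM rather than supplied by hand.
    rows = conn.execute(candidate_sql).fetchall()
    return sorted(rows) == sorted(expected_rows)

conn = build_test_database()
expected = [("Ada",), ("Eve",)]  # top-paid employees on this toy database
ok = execution_matches(
    conn,
    "SELECT name FROM employees "
    "WHERE salary = (SELECT MAX(salary) FROM employees)",
    expected,
)
print(ok)  # → True
```

A candidate query passes the generated test case only when its execution results match the predicted ground truth on the generated database.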
no code implementations • 21 Dec 2023 • Zhenwen Li, Jian-Guang Lou, Tao Xie
To address this issue, in this paper we report our insight that NL2ERM is highly similar to the increasingly popular task of text-to-SQL, and we propose a data transformation algorithm that converts existing text-to-SQL data into NL2ERM data.
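The correspondence behind such a transformation can be illustrated with a toy mapping (a sketch under my own simplifying assumptions, not the paper's algorithm): tables in a text-to-SQL schema become entities, columns become attributes, and foreign keys become relationships in the ER model.

```python
def sql_schema_to_er(tables, foreign_keys):
    # tables: {table_name: [column, ...]}
    # foreign_keys: [(src_table, src_column, dst_table, dst_column), ...]
    # Tables map to entities with their columns as attributes;
    # each foreign key induces a relationship between two entities.
    entities = {name: list(cols) for name, cols in tables.items()}
    relationships = [(src_t, dst_t) for (src_t, _sc, dst_t, _dc) in foreign_keys]
    return {"entities": entities, "relationships": relationships}

# Hypothetical text-to-SQL schema fragment.
tables = {
    "student": ["id", "name", "dept_id"],
    "department": ["id", "name"],
}
fks = [("student", "dept_id", "department", "id")]
print(sql_schema_to_er(tables, fks))
# → {'entities': {'student': ['id', 'name', 'dept_id'],
#                 'department': ['id', 'name']},
#    'relationships': [('student', 'department')]}
```

The natural-language question paired with the SQL query can then be reused as the NL description paired with the derived ER model, which is the crux of reusing text-to-SQL data for NL2ERM.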
no code implementations • ACL 2020 • Zhenwen Li, Wenhao Wu, Sujian Li
In this paper, we argue that elementary discourse unit (EDU) is a more appropriate textual unit of content selection than the sentence unit in abstractive summarization.
1 code implementation • 21 Nov 2019 • Tunhou Zhang, Hsin-Pai Cheng, Zhenwen Li, Feng Yan, Chengyu Huang, Hai Li, Yiran Chen
Specifically, both ShrinkCNN and ShrinkRNN are crafted within 1.5 GPU hours, which is 7.2x and 6.7x faster than the crafting time of SOTA CNN and RNN models, respectively.