1 code implementation • 22 Mar 2023 • Fengji Zhang, Bei Chen, Yue Zhang, Jacky Keung, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, Weizhu Chen
Repository-level code completion is the task of continuing unfinished code using the broader context of the repository.
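Approaches of this kind typically retrieve similar snippets from elsewhere in the repository before invoking the completion model. Below is a minimal sketch of such a retrieval step using token-level Jaccard similarity; the `retrieve_context` helper and the toy repository are hypothetical illustrations, not the paper's implementation.

```python
import re

def tokens(code: str) -> set:
    """Split a code fragment into a set of identifier-like tokens."""
    return set(re.findall(r"[A-Za-z_]\w*", code))

def retrieve_context(unfinished: str, repo_snippets: list, top_k: int = 2) -> list:
    """Rank repository snippets by Jaccard similarity to the unfinished code."""
    query = tokens(unfinished)
    def jaccard(snippet):
        t = tokens(snippet)
        return len(query & t) / len(query | t) if query | t else 0.0
    return sorted(repo_snippets, key=jaccard, reverse=True)[:top_k]

# Toy "repository" of existing snippets.
repo = [
    "def load_config(path): return json.load(open(path))",
    "def save_model(model, path): torch.save(model.state_dict(), path)",
    "def load_model(model, path): model.load_state_dict(torch.load(path))",
]
context = retrieve_context("def load_model(model, path):", repo)
```

The retrieved snippets would then be prepended to the unfinished code as additional prompt context for the completion model.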
no code implementations • 16 Mar 2023 • Shushan Arakelyan, Rocktim Jyoti Das, Yi Mao, Xiang Ren
We systematically study how three large language models with code capabilities (CodeT5, Codex, and ChatGPT) generalize to out-of-domain data.
no code implementations • 20 Dec 2022 • Dong Li, Yelong Shen, Ruoming Jin, Yi Mao, Kuan Wang, Weizhu Chen
Pre-trained language models have achieved promising success in code retrieval tasks, in which a natural language documentation query is used to find the most relevant existing code snippet.
no code implementations • 22 Nov 2022 • Jason Phang, Yi Mao, Pengcheng He, Weizhu Chen
Fine-tuning large language models for different tasks can be costly and inefficient, and even methods that reduce the number of tuned parameters still require full gradient-based optimization.
1 code implementation • 15 Oct 2022 • Ziqing Wang, Zhirong Ye, Yuyang Du, Yi Mao, Yanying Liu, Ziling Wu, Jun Wang
DBSCAN is a widely used density-based clustering algorithm.
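For reference, DBSCAN itself can be stated in a few lines: a point with at least `min_pts` neighbors within radius `eps` is a core point, clusters grow outward from core points, and points reachable from no core point are labeled noise. A minimal pure-Python sketch with brute-force neighbor search, for illustration only:

```python
from collections import deque

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each point with a cluster id, or -1 for noise."""
    def neighbors(i):
        # Brute-force epsilon-neighborhood (includes the point itself).
        return [j for j, q in enumerate(points)
                if sum((a - b) ** 2 for a, b in zip(points[i], q)) <= eps ** 2]

    labels = [None] * len(points)
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1              # provisionally noise
            continue
        labels[i] = cluster             # new core point starts a cluster
        queue = deque(nbrs)
        while queue:
            j = queue.popleft()
            if labels[j] == -1:         # former noise becomes a border point
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            j_nbrs = neighbors(j)
            if len(j_nbrs) >= min_pts:  # only core points expand the cluster
                queue.extend(j_nbrs)
        cluster += 1
    return labels

pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (10, 10)]
labels = dbscan(pts, eps=0.5, min_pts=2)
```

Here the first three points form one cluster, the next two a second, and the isolated point is noise.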
no code implementations • 13 Oct 2022 • Shiyang Li, Jianshu Chen, Yelong Shen, Zhiyu Chen, Xinlu Zhang, Zekun Li, Hong Wang, Jing Qian, Baolin Peng, Yi Mao, Wenhu Chen, Xifeng Yan
Integrating free-text explanations into in-context learning of large language models (LLMs) has been shown to elicit strong reasoning capabilities along with reasonable explanations.
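The prompting recipe this refers to can be illustrated by assembling a few-shot prompt in which every demonstration carries a free-text explanation before its answer; the exemplars below are invented for illustration and are not from the paper.

```python
def build_prompt(demos, question):
    """Assemble a few-shot prompt where each demonstration includes a
    free-text explanation before its answer."""
    parts = []
    for d in demos:
        parts.append(f"Q: {d['q']}\nExplanation: {d['expl']}\nA: {d['a']}")
    parts.append(f"Q: {question}\nExplanation:")  # model continues from here
    return "\n\n".join(parts)

demos = [
    {"q": "Is 17 prime?",
     "expl": "17 has no divisors other than 1 and itself.",
     "a": "yes"},
    {"q": "Is 21 prime?",
     "expl": "21 = 3 * 7, so it has a nontrivial divisor.",
     "a": "no"},
]
prompt = build_prompt(demos, "Is 29 prime?")
```

The prompt ends mid-pattern so the model is steered to produce an explanation first and then an answer.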
1 code implementation • NAACL 2022 • Zhengbao Jiang, Yi Mao, Pengcheng He, Graham Neubig, Weizhu Chen
The information in tables can be an important complement to text, making table-based question answering (QA) systems of great value.
Ranked #8 on Semantic Parsing on WikiTableQuestions
no code implementations • NAACL (ACL) 2022 • Abedelkadir Asi, Song Wang, Roy Eisenstadt, Dean Geckt, Yarin Kuper, Yi Mao, Royi Ronen
Summarizing sales calls is a routine task performed manually by salespeople.
1 code implementation • ACL 2022 • Wei Chen, Yeyun Gong, Song Wang, Bolun Yao, Weizhen Qi, Zhongyu Wei, Xiaowu Hu, Bartuer Zhou, Yi Mao, Weizhu Chen, Biao Cheng, Nan Duan
Dialog response generation in open domain is an important research topic where the main challenge is to generate relevant and diverse responses.
no code implementations • NAACL 2022 • Yu Li, Baolin Peng, Yelong Shen, Yi Mao, Lars Liden, Zhou Yu, Jianfeng Gao
To address these challenges, we present PLUG, a language model that homogenizes different knowledge sources into a unified knowledge representation for knowledge-grounded dialogue generation tasks.
2 code implementations • ACL 2022 • Tianyu Liu, Yizhe Zhang, Chris Brockett, Yi Mao, Zhifang Sui, Weizhu Chen, Bill Dolan
Large pretrained generative models like GPT-3 often suffer from hallucinating non-existent or incorrect content, which undermines their potential merits in real applications.
1 code implementation • EMNLP 2021 • Jungo Kasai, Hao Peng, Yizhe Zhang, Dani Yogatama, Gabriel Ilharco, Nikolaos Pappas, Yi Mao, Weizhu Chen, Noah A. Smith
Specifically, we propose a swap-then-finetune procedure: in an off-the-shelf pretrained transformer, we replace the softmax attention with its linear-complexity recurrent alternative and then finetune.
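The swap rests on a standard observation: if the exponential similarity exp(q·k) is replaced by a factorized score phi(q)·phi(k), the sums over keys can be precomputed once and shared by every query, reducing cost from quadratic to linear in sequence length. A toy sketch of both variants follows; the softplus feature map is an arbitrary positive map chosen for illustration, not the paper's choice.

```python
import math

def softmax_attention(Q, K, V):
    """Standard attention: for each query, softmax over scores against all keys."""
    out = []
    for q in Q:
        scores = [math.exp(sum(a * b for a, b in zip(q, k))) for k in K]
        Z = sum(scores)
        out.append([sum(s * v[c] for s, v in zip(scores, V)) / Z
                    for c in range(len(V[0]))])
    return out

def linear_attention(Q, K, V,
                     phi=lambda x: [math.log1p(math.exp(t)) for t in x]):
    """Kernelized attention: phi(q).phi(k) replaces exp(q.k).  The key
    summaries S = sum_j phi(k_j) v_j^T and z = sum_j phi(k_j) are computed
    once and reused for every query -- O(n) instead of O(n^2)."""
    fK = [phi(k) for k in K]
    d_k, d_v = len(fK[0]), len(V[0])
    S = [[sum(fk[r] * v[c] for fk, v in zip(fK, V)) for c in range(d_v)]
         for r in range(d_k)]
    z = [sum(fk[r] for fk in fK) for r in range(d_k)]
    out = []
    for q in Q:
        fq = phi(q)
        norm = sum(a * b for a, b in zip(fq, z))
        out.append([sum(fq[r] * S[r][c] for r in range(d_k)) / norm
                    for c in range(d_v)])
    return out

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
attn_soft = softmax_attention(Q, K, V)
attn_lin = linear_attention(Q, K, V)
```

Both variants produce a convex combination of the value rows per query; they differ only in how the mixing weights are scored, which is what makes the swap-then-finetune procedure possible.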
Ranked #2 on Machine Translation on WMT2017 Chinese-English
no code implementations • 27 Jan 2021 • Bohua Li, Jianrong Tan, Yi Mao
The 21 cm linear polarization due to Thomson scattering off free electrons can probe the distribution of neutral hydrogen in the intergalactic medium during the epoch of reionization, complementary to the 21 cm temperature fluctuations.
Cosmology and Nongalactic Astrophysics Astrophysics of Galaxies Instrumentation and Methods for Astrophysics
1 code implementation • 5 Jan 2021 • Michele Bianco, Ilian T. Iliev, Kyungjin Ahn, Sambit K. Giri, Yi Mao, Hyunbae Park, Paul R. Shapiro
Unresolved fluctuations in numerical simulations and analytical calculations are included using a gas clumping factor, typically assumed to be independent of the local environment.
Cosmology and Nongalactic Astrophysics
no code implementations • EMNLP 2020 • Tao Shen, Yi Mao, Pengcheng He, Guodong Long, Adam Trischler, Weizhu Chen
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training, to inject language models with structured knowledge via learning from raw text.
no code implementations • 19 Feb 2020 • Hayato Shimabukuro, Yi Mao, Jianrong Tan
The bubble size distribution of ionized hydrogen regions probes the morphology of H II bubbles during reionization.
Cosmology and Nongalactic Astrophysics
no code implementations • 18 Feb 2020 • Yujia Xie, Tianyi Zhou, Yi Mao, Weizhu Chen
As a result, the contextual dependencies modeled by CSA are highly relevant to the query.
no code implementations • 21 Aug 2019 • Pengcheng He, Yi Mao, Kaushik Chakrabarti, Weizhu Chen
In this work, we present X-SQL, a new network architecture for the problem of parsing natural language to SQL query.
no code implementations • 13 Sep 2018 • Tianze Shi, Kedar Tatwawadi, Kaushik Chakrabarti, Yi Mao, Oleksandr Polozov, Weizhu Chen
We present a sequence-to-action parsing approach for the natural language to SQL task that incrementally fills the slots of a SQL query with feasible actions from a pre-defined inventory.
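The slot-filling idea can be sketched abstractly: at each step the parser picks one action from the subset that is feasible for the current slot, so syntactically invalid SQL can never be produced. The schema, slot order, and `choose` policy below are hypothetical stand-ins for the learned components in the paper.

```python
# Hypothetical action inventory and single-table schema, for illustration.
SCHEMA = {"columns": ["name", "population", "country"],
          "agg": ["", "MAX", "MIN", "COUNT"],
          "op": ["=", ">", "<"]}

def feasible_actions(slot):
    """Return the actions allowed for the slot being filled next."""
    if slot in ("select_col", "where_col"):
        return SCHEMA["columns"]
    if slot == "agg":
        return SCHEMA["agg"]
    if slot == "op":
        return SCHEMA["op"]
    raise ValueError(slot)

def parse(choose):
    """Fill SQL slots one action at a time; `choose` stands in for the
    learned policy that scores feasible actions given the question."""
    q = {}
    for slot in ("agg", "select_col", "where_col", "op"):
        action = choose(slot, feasible_actions(slot))
        assert action in feasible_actions(slot)  # never an infeasible action
        q[slot] = action
    sel = f"{q['agg']}({q['select_col']})" if q["agg"] else q["select_col"]
    return f"SELECT {sel} FROM t WHERE {q['where_col']} {q['op']} ?"

# A trivial deterministic "policy" for demonstration.
demo = {"agg": "MAX", "select_col": "population",
        "where_col": "country", "op": "="}
sql = parse(lambda slot, actions: demo[slot])
```

Constraining the decoder to feasible actions is what lets an incremental parser guarantee executable output by construction.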
1 code implementation • 9 Jul 2018 • Chenglong Wang, Kedar Tatwawadi, Marc Brockschmidt, Po-Sen Huang, Yi Mao, Oleksandr Polozov, Rishabh Singh
We consider the problem of neural semantic parsing, which translates natural language questions into executable SQL queries.
no code implementations • ICLR 2018 • Hao Liu*, Yihao Feng*, Yi Mao, Dengyong Zhou, Jian Peng, Qiang Liu
Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems.
2 code implementations • 30 Oct 2017 • Hao Liu, Yihao Feng, Yi Mao, Dengyong Zhou, Jian Peng, Qiang Liu
Policy gradient methods have achieved remarkable successes in solving challenging reinforcement learning problems.
no code implementations • NeurIPS 2012 • Dengyong Zhou, Sumit Basu, Yi Mao, John C. Platt
We propose a minimax entropy principle to improve the quality of these labels.