no code implementations • EMNLP (MRL) 2021 • Peng Shi, Rui Zhang, He Bai, Jimmy Lin
Dense retrieval has shown great success for passage ranking in English.
1 code implementation • ACL 2022 • He Bai, Tong Wang, Alessandro Sordoni, Peng Shi
Class-based language models (LMs) have been long devised to address context sparsity in $n$-gram LMs.
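As a pointer to the mechanism, the standard class-based bigram factorization (the general idea such models build on; the paper's exact parameterization may differ) assigns each word $w$ a class $c(w)$ and shares transition statistics across all words in a class, which mitigates the sparsity of word-level $n$-gram counts:

```latex
% Illustrative class-based bigram factorization; c(w) denotes the class of word w.
P(w_i \mid w_{i-1}) \;\approx\; P\bigl(w_i \mid c(w_i)\bigr)\, P\bigl(c(w_i) \mid c(w_{i-1})\bigr)
```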
1 code implementation • 16 Jan 2022 • Tianbao Xie, Chen Henry Wu, Peng Shi, Ruiqi Zhong, Torsten Scholak, Michihiro Yasunaga, Chien-Sheng Wu, Ming Zhong, Pengcheng Yin, Sida I. Wang, Victor Zhong, Bailin Wang, Chengzu Li, Connor Boyle, Ansong Ni, Ziyu Yao, Dragomir Radev, Caiming Xiong, Lingpeng Kong, Rui Zhang, Noah A. Smith, Luke Zettlemoyer, Tao Yu
Structured knowledge grounding (SKG) leverages structured knowledge to complete user requests, such as semantic parsing over databases and question answering over knowledge bases.
Ranked #1 on Task-Oriented Dialogue Systems on KVRET
no code implementations • 4 Jan 2022 • Shiqi Zheng, Peng Shi, Huiyan Zhang
This study focuses on the periodic event-triggered (PET) cooperative output regulation problem for a class of nonlinear multi-agent systems.
no code implementations • WNUT (ACL) 2021 • Mengyi Gao, Canran Xu, Peng Shi
State-of-the-art approaches to the spelling error correction problem include Transformer-based Seq2Seq models, which require large training sets and suffer from slow inference, and sequence-labeling models built on Transformer encoders such as BERT, which operate over a token-level label space and therefore require a large pre-defined vocabulary.
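To illustrate the token-level label space that the sequence-labeling formulation implies, here is a minimal sketch with a hypothetical KEEP/replacement label set; the actual label space and tagger are not from the paper:

```python
def correct_with_labels(tokens, label_per_token):
    """Sequence-labeling view of spelling correction: a tagger predicts one
    label per input token, where each label is either KEEP or a replacement
    word drawn from a pre-defined vocabulary (illustrative label set only)."""
    corrected = []
    for tok, label in zip(tokens, label_per_token):
        corrected.append(tok if label == "KEEP" else label)
    return " ".join(corrected)

# Example: a hypothetical tagger output correcting one misspelled token.
tokens = ["please", "recieve", "the", "package"]
labels = ["KEEP", "receive", "KEEP", "KEEP"]
print(correct_with_labels(tokens, labels))  # "please receive the package"
```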
no code implementations • 15 Sep 2021 • Naihao Deng, Shuaichen Chang, Peng Shi, Tao Yu, Rui Zhang
Existing text-to-SQL research only considers complete questions as the input, but lay users may struggle to formulate a complete question.
no code implementations • 3 Sep 2021 • Peng Shi, Rui Zhang, He Bai, Jimmy Lin
Dense retrieval has shown great success in passage ranking in English.
1 code implementation • EMNLP (MRL) 2021 • Xinyu Zhang, Xueguang Ma, Peng Shi, Jimmy Lin
We present Mr. TyDi, a multi-lingual benchmark dataset for mono-lingual retrieval in eleven typologically diverse languages, designed to evaluate ranking with learned dense representations.
1 code implementation • Findings (ACL) 2021 • Chang Shu, Yusen Zhang, Xiangyu Dong, Peng Shi, Tao Yu, Rui Zhang
Text generation from semantic parses aims to produce textual descriptions for formal representation inputs such as logic forms and SQL queries.
no code implementations • 17 Jun 2021 • Peng Shi, Tao Yu, Patrick Ng, Zhiguo Wang
Furthermore, we propose two value-filling methods to bridge existing zero-shot semantic parsers to real-world applications, since most existing parsers ignore value filling in the synthesized SQL.
3 code implementations • 18 Dec 2020 • Peng Shi, Patrick Ng, Zhiguo Wang, Henghui Zhu, Alexander Hanbo Li, Jun Wang, Cicero Nogueira dos Santos, Bing Xiang
Most recently, there has been significant interest in learning contextual representations for various NLP tasks by leveraging large-scale text corpora to train large neural language models with self-supervised learning objectives, such as Masked Language Model (MLM).
Ranked #3 on Semantic Parsing on Spider
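For context on the MLM objective mentioned in the entry above, a minimal sketch of the core masking step, assuming a whitespace-tokenized input and a plain `[MASK]` symbol; real implementations such as BERT also mix in random and unchanged replacements rather than always masking:

```python
import random

def mask_for_mlm(tokens, mask_token="[MASK]", mask_prob=0.15, seed=0):
    """Simplified Masked Language Model objective: randomly replace a
    fraction of tokens with a mask symbol and keep the originals as
    prediction targets for the model to recover."""
    rng = random.Random(seed)
    masked, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append(mask_token)
            targets.append(tok)      # the model must recover this token
        else:
            masked.append(tok)
            targets.append(None)     # no loss on unmasked positions
    return masked, targets

print(mask_for_mlm("select name from users where age > 30".split()))
```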
no code implementations • Findings of the Association for Computational Linguistics 2020 • Peng Shi, He Bai, Jimmy Lin
We tackle the challenge of cross-lingual training of neural document ranking models for mono-lingual retrieval, specifically leveraging relevance judgments in English to improve search in non-English languages.
1 code implementation • 23 Oct 2020 • Yusen Zhang, Xiangyu Dong, Shuaichen Chang, Tao Yu, Peng Shi, Rui Zhang
Neural models have achieved significant results on the text-to-SQL task, where most current work assumes all input questions are legal and generates a SQL query for any input.
no code implementations • 16 Jul 2020 • Afshin Shoeibi, Marjane Khodatars, Roohallah Alizadehsani, Navid Ghassemi, Mahboobeh Jafari, Parisa Moridian, Ali Khadem, Delaram Sadeghi, Sadiq Hussain, Assef Zare, Zahra Alizadeh Sani, Javad Bazeli, Fahime Khozeimeh, Abbas Khosravi, Saeid Nahavandi, U. Rajendra Acharya, Peng Shi
Hence, artificial intelligence (AI) methodologies can be used to obtain consistently high performance.
no code implementations • 27 May 2020 • Peng Shi
In classical mechanics, the motion of an object is described by Newton's three laws of motion, which means that the motion of the material elements composing a continuum can be described with the particle model.
Classical Physics • Materials Science • Fluid Dynamics
1 code implementation • 30 Apr 2020 • He Bai, Peng Shi, Jimmy Lin, Yuqing Xie, Luchen Tan, Kun Xiong, Wen Gao, Ming Li
To verify this, we propose a segment-aware Transformer (Segatron), by replacing the original token position encoding with a combined position encoding of paragraph, sentence, and token.
Ranked #9 on Language Modelling on WikiText-103
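A minimal sketch of the combined position encoding idea described in the entry above: sum learned embeddings for a token's paragraph index, sentence index, and token index instead of using only the token position. The embedding sizes and the use of learned (rather than sinusoidal) embeddings are assumptions for illustration, not the paper's exact configuration:

```python
import torch
import torch.nn as nn

class SegmentAwarePositionEncoding(nn.Module):
    """Combined position encoding: token-, sentence-, and paragraph-level
    position embeddings are summed (illustrative sizes)."""
    def __init__(self, dim, max_tokens=512, max_sents=64, max_paras=16):
        super().__init__()
        self.token_pos = nn.Embedding(max_tokens, dim)
        self.sent_pos = nn.Embedding(max_sents, dim)
        self.para_pos = nn.Embedding(max_paras, dim)

    def forward(self, token_idx, sent_idx, para_idx):
        # Each index tensor has shape (batch, seq_len).
        return self.token_pos(token_idx) + self.sent_pos(sent_idx) + self.para_pos(para_idx)

# Usage: positions for a 6-token sequence spanning 2 sentences in 1 paragraph.
enc = SegmentAwarePositionEncoding(dim=16)
tok = torch.arange(6).unsqueeze(0)             # token positions 0..5
sent = torch.tensor([[0, 0, 0, 1, 1, 1]])      # first 3 tokens in sentence 0
para = torch.zeros(1, 6, dtype=torch.long)     # all tokens in paragraph 0
print(enc(tok, sent, para).shape)              # torch.Size([1, 6, 16])
```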
1 code implementation • ACL 2021 • He Bai, Peng Shi, Jimmy Lin, Luchen Tan, Kun Xiong, Wen Gao, Jie Liu, Ming Li
Experimental results show that the Chinese GPT2 can generate better essay endings with the end-of-paragraph (EOP) token.
no code implementations • 8 Nov 2019 • Peng Shi, Jimmy Lin
Recent work has shown the surprising ability of multi-lingual BERT to serve as a zero-shot cross-lingual transfer model for a number of language processing tasks.
no code implementations • IJCNLP 2019 • Jinfeng Rao, Linqing Liu, Yi Tay, Wei Yang, Peng Shi, Jimmy Lin
A core problem of information retrieval (IR) is relevance matching: ranking documents by their relevance to a user's query.
1 code implementation • IJCNLP 2019 • Hsiu-Wei Yang, Yanyan Zou, Peng Shi, Wei Lu, Jimmy Lin, Xu Sun
Multilingual knowledge graphs (KGs), such as YAGO and DBpedia, represent entities in different languages.
3 code implementations • 10 Apr 2019 • Peng Shi, Jimmy Lin
We present simple BERT-based models for relation extraction and semantic role labeling.
Ranked #22 on Relation Extraction on TACRED
1 code implementation • 15 Mar 2019 • Michael Azmy, Peng Shi, Jimmy Lin, Ihab F. Ilyas
This paper explores the problem of matching entities across different knowledge graphs.
no code implementations • NAACL 2019 • Peng Shi, Jinfeng Rao, Jimmy Lin
This paper explores the problem of ranking short social media posts with respect to user queries using neural networks.
1 code implementation • COLING 2018 • Michael Azmy, Peng Shi, Jimmy Lin, Ihab Ilyas
To address this problem, we present SimpleDBpediaQA, a new benchmark dataset for simple question answering over knowledge graphs that was created by mapping SimpleQuestions entities and predicates from Freebase to DBpedia.
no code implementations • 8 Mar 2018 • Shuqing Bian, Zhenpeng Deng, Fei Li, Will Monroe, Peng Shi, Zijun Sun, Wei Wu, Sikuang Wang, William Yang Wang, Arianna Yuan, Tianwei Zhang, Jiwei Li
For the best setting, the proposed system is able to identify scam ICO projects with 0.83 precision.
no code implementations • NAACL 2018 • Salman Mohammed, Peng Shi, Jimmy Lin
We examine the problem of question answering over knowledge graphs, focusing on simple questions that can be answered by the lookup of a single fact.