no code implementations • COLING 2022 • Jialin Chen, Zhuosheng Zhang, Hai Zhao
Machine reading comprehension (MRC) poses new challenges to logical reasoning, which aims to understand the implicit logical relations entailed in the given contexts and perform inference over them.
no code implementations • 8 Feb 2023 • Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, Diyi Yang
Spurred by advancements in scale, large language models (LLMs) have demonstrated the ability to perform a variety of natural language processing (NLP) tasks zero-shot -- i.e., without adaptation on downstream data.
3 code implementations • 2 Feb 2023 • Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola
Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer.
Ranked #1 on Science Question Answering on ScienceQA
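The abstract above describes generating intermediate reasoning chains before inferring the answer. Below is a minimal, language-only sketch of that two-stage idea; the `model` callable is a placeholder for any text generator and is not the paper's implementation.

```python
from typing import Callable

def two_stage_cot(question: str, context: str, model: Callable[[str], str]) -> str:
    """Minimal two-stage chain-of-thought sketch: first elicit a rationale,
    then condition the answer on that rationale. `model` is any text-in/text-out
    generator supplied by the caller (a placeholder, not a specific API)."""
    rationale_prompt = (
        f"Context: {context}\nQuestion: {question}\n"
        "Let's think step by step and explain the reasoning:"
    )
    rationale = model(rationale_prompt)

    answer_prompt = (
        f"Context: {context}\nQuestion: {question}\n"
        f"Reasoning: {rationale}\nTherefore, the answer is:"
    )
    return model(answer_prompt)

# Usage with a dummy generator (replace with a real LLM call):
print(two_stage_cot("What is 2 + 3?", "",
                    lambda p: "5" if "Therefore" in p else "2 plus 3 equals 5."))
```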
no code implementations • 10 Jan 2023 • Zhuosheng Zhang, Hai Zhao, Longxiang Liu
We decouple the contextualized word representations by masking mechanisms in Transformer-based PrLMs, making each word focus only on the words in the current utterance, other utterances, and the two speaker roles (i.e., utterances of the sender and utterances of the receiver), respectively.
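A rough illustration of the decoupling described above: separate boolean attention masks can be built from utterance and speaker ids so that each token's attention is restricted to the current utterance, the other utterances, or one speaker's utterances. The toy ids and mask construction below are assumptions for illustration, not the paper's code.

```python
import numpy as np

# Toy dialogue: one utterance id and one speaker id per token.
utt_ids = np.array([0, 0, 0, 1, 1, 2, 2, 2])   # which utterance each token belongs to
spk_ids = np.array([0, 0, 0, 1, 1, 0, 0, 0])   # 0 = sender, 1 = receiver

same_utt  = utt_ids[:, None] == utt_ids[None, :]   # attend within the current utterance
other_utt = ~same_utt                               # attend to the other utterances
sender_mask   = np.broadcast_to(spk_ids[None, :] == 0, same_utt.shape)  # sender's utterances
receiver_mask = np.broadcast_to(spk_ids[None, :] == 1, same_utt.shape)  # receiver's utterances

# Each boolean matrix can be added to attention logits as -inf where False,
# giving one decoupled view per mask (current utterance / other utterances / each speaker).
print(same_utt.astype(int))
```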
no code implementations • 9 Jan 2023 • Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, Hai Zhao
Representation learning is the foundation of natural language processing (NLP).
no code implementations • 16 Dec 2022 • Junlong Li, Zhuosheng Zhang, Hai Zhao
Open-Domain Question Answering (ODQA) requires models to answer factoid questions with no context given.
no code implementations • 1 Dec 2022 • Zhuosheng Zhang, Hai Zhao, Masao Utiyama, Eiichiro Sumita
Discriminative pre-trained language models (PLMs) learn to predict original texts from intentionally corrupted ones.
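As a hedged illustration of the corrupt-then-discriminate setup mentioned above (in the spirit of replaced-token detection), the sketch below randomly replaces token ids and computes a per-position binary detection loss; the shapes and corruption rate are arbitrary choices, not the paper's.

```python
import torch

def corrupt(tokens: torch.Tensor, vocab_size: int, rate: float = 0.15):
    """Randomly replace a fraction of token ids; return (corrupted, labels),
    where labels mark which positions were changed."""
    mask = torch.rand(tokens.shape) < rate
    random_tokens = torch.randint(0, vocab_size, tokens.shape)
    corrupted = torch.where(mask, random_tokens, tokens)
    labels = (corrupted != tokens).float()   # 1 = corrupted position
    return corrupted, labels

tokens = torch.randint(0, 1000, (2, 16))
corrupted, labels = corrupt(tokens, vocab_size=1000)

# A discriminator head would output one logit per position; the detection loss is:
logits = torch.randn(2, 16)                  # stand-in for model outputs
loss = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)
print(loss.item())
```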
1 code implementation • 23 Oct 2022 • Wenhao Yu, Chenguang Zhu, Zhihan Zhang, Shuohang Wang, Zhuosheng Zhang, Yuwei Fang, Meng Jiang
However, applying such methods to commonsense reasoning tasks faces two unique challenges, i.e., the lack of a general large-scale corpus for retrieval and a corresponding effective commonsense retriever.
no code implementations • 13 Oct 2022 • Sizhe Zhou, Siru Ouyang, Zhuosheng Zhang, Hai Zhao
In the open-retrieval conversational machine reading (OR-CMR) task, machines are required to perform multi-turn question answering given a dialogue history and a textual knowledge base.
1 code implementation • 12 Oct 2022 • Zhuosheng Zhang, Shuohang Wang, Yichong Xu, Yuwei Fang, Wenhao Yu, Yang Liu, Hai Zhao, Chenguang Zhu, Michael Zeng
Leveraging task-aware annotated data as supervised signals to assist with self-supervised learning on large-scale unlabeled data has become a new trend in pre-training language models.
1 code implementation • 11 Oct 2022 • Zhuosheng Zhang, Hai Zhao, Ming Zhou
They treat training instances equally throughout the training process, with little attention to the individual contribution of those instances.
1 code implementation • 7 Oct 2022 • Zhuosheng Zhang, Aston Zhang, Mu Li, Alex Smola
Providing these steps for prompting demonstrations is called chain-of-thought (CoT) prompting.
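One simple way to construct such demonstrations automatically is sketched below, under the assumption that questions are clustered and one representative per cluster is appended with a reasoning trigger; this is an illustration, not the paper's exact pipeline.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

questions = [
    "If John has 3 apples and buys 2 more, how many does he have?",
    "A train travels 60 miles in 1 hour. How far does it go in 3 hours?",
    "Sarah had 10 candies and gave away 4. How many are left?",
    "A car moves at 50 km/h for 2 hours. What distance is covered?",
]

# Cluster the questions, then take one question per cluster as a demonstration seed.
vecs = TfidfVectorizer().fit_transform(questions)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vecs)

demos = []
for c in range(km.n_clusters):
    idx = [i for i, lbl in enumerate(km.labels_) if lbl == c][0]  # simplistic pick: first in cluster
    demos.append(questions[idx] + " Let's think step by step.")

print(demos)
```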
no code implementations • 23 Aug 2022 • Dongjie Yang, Zhuosheng Zhang, Hai Zhao
Masked Language Modeling (MLM) has been widely used as the denoising objective in pre-training language models (PrLMs).
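For reference, a minimal sketch of the MLM denoising objective: mask a fraction of token ids and compute cross-entropy only on the masked positions. The mask rate and tensor shapes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

MASK_ID, IGNORE = 103, -100
token_ids = torch.randint(5, 1000, (2, 16))        # toy batch of token ids

# Mask ~15% of positions; the loss is computed only on the masked ones.
mask = torch.rand(token_ids.shape) < 0.15
mask[0, 0] = True                                  # ensure at least one masked position in this toy
inputs = torch.where(mask, torch.full_like(token_ids, MASK_ID), token_ids)
labels = torch.where(mask, token_ids, torch.full_like(token_ids, IGNORE))

logits = torch.randn(2, 16, 1000)                  # stand-in for model outputs
loss = F.cross_entropy(logits.view(-1, 1000), labels.view(-1), ignore_index=IGNORE)
print(loss.item())
```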
no code implementations • 21 Jul 2022 • Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
However, we find that most existing textual adversarial examples are unnatural and can be easily distinguished by both humans and machines.
1 code implementation • 18 Apr 2022 • Yiyang Li, Hai Zhao, Zhuosheng Zhang
Multi-turn dialogue modeling, as a challenging branch of natural language understanding (NLU), aims to build representations for machines to understand human dialogues, providing a solid foundation for multiple downstream tasks.
1 code implementation • Findings (ACL) 2022 • Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
We question the validity of current evaluation of robustness of PrLMs based on these non-natural adversarial samples and propose an anomaly detector to evaluate the robustness of PrLMs with more natural adversarial samples.
no code implementations • 23 Nov 2021 • Xiangxiang Zhu, Kunde Yang, Zhuosheng Zhang
By analyzing the properties of the IF equation, we prove that a good IF equation can unify the well-known IF and group delay estimators and provide an effective way to characterize the mixture of time-varying and frequency-varying signals.
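For context only, a standard phase-derivative instantaneous-frequency estimate via the analytic signal is sketched below; this is a generic baseline for comparison, not the IF-equation method proposed in the paper.

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.cos(2 * np.pi * (50 * t + 40 * t ** 2))     # linear chirp: IF = 50 + 80 t Hz

# Instantaneous frequency from the phase derivative of the analytic signal.
analytic = hilbert(x)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2.0 * np.pi) * fs

print(inst_freq[100:105])                           # ~58 Hz near t = 0.1 s
```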
1 code implementation • ACL 2022 • Baorong Huang, Zhuosheng Zhang, Hai Zhao
In this paper, we imitate the human reading process of connecting anaphoric expressions and explicitly leverage the coreference information of entities to enhance the word embeddings from the pre-trained language model, so as to highlight the coreference mentions of entities that must be identified for coreference-intensive question answering in QUOREF, a relatively new dataset specifically designed to evaluate the coreference-related performance of a model.
1 code implementation • ACL 2022 • Xinbei Ma, Zhuosheng Zhang, Hai Zhao
Tangled multi-party dialogue contexts lead to challenges for dialogue reading comprehension, where multiple dialogue threads flow simultaneously within a common dialogue record, increasing the difficulty of understanding the dialogue history for both humans and machines.
no code implementations • ACL 2022 • Bohong Wu, Zhuosheng Zhang, JinYuan Wang, Hai Zhao
In detail, we introduce an in-passage negative sampling strategy to encourage a diverse generation of sentence representations within the same passage.
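A minimal sketch of in-passage negative sampling as described above: an InfoNCE-style loss whose negatives are other sentence embeddings from the same passage. The temperature and embedding sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def in_passage_contrastive_loss(anchor, positive, passage_sentences, temperature=0.05):
    """InfoNCE-style loss where the negatives are other sentence embeddings
    drawn from the same passage as the anchor."""
    candidates = torch.cat([positive.unsqueeze(0), passage_sentences], dim=0)
    sims = F.cosine_similarity(anchor.unsqueeze(0), candidates) / temperature
    return F.cross_entropy(sims.unsqueeze(0), torch.zeros(1, dtype=torch.long))

dim = 32
anchor = torch.randn(dim)
positive = torch.randn(dim)            # e.g., a dropout-augmented view of the anchor
negatives = torch.randn(5, dim)        # other sentences from the same passage
print(in_passage_contrastive_loss(anchor, positive, negatives).item())
```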
1 code implementation • 13 Oct 2021 • Zhuosheng Zhang, Hanqing Zhang, Keming Chen, Yuhang Guo, Jingyun Hua, Yulong Wang, Ming Zhou
Although pre-trained language models (PLMs) have achieved remarkable improvements in a wide range of NLP tasks, they are expensive in terms of time and resources.
no code implementations • 11 Oct 2021 • Zhuosheng Zhang, Hai Zhao
In this paper, we review the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.
1 code implementation • PACLIC 2021 • Yuchen He, Zhuosheng Zhang, Hai Zhao
Compared with traditional plain-passage-style MRC, multi-party dialogue machine reading comprehension (MRC) raises an even more challenging understanding goal: dialogues involving more than two speakers.
Ranked #1 on Discourse Parsing on Molweni
no code implementations • 29 Sep 2021 • Siru Ouyang, Zhuosheng Zhang, Hai Zhao
Pre-trained language models (PrLMs) have been shown useful for enhancing a broad range of natural language understanding (NLU) tasks.
no code implementations • 9 Sep 2021 • Xinbei Ma, Zhuosheng Zhang, Hai Zhao
Multi-party multi-turn dialogue comprehension brings unprecedented challenges in handling the complicated scenarios arising from multiple speakers and the criss-crossed discourse relationships among speaker-aware utterances.
Ranked #1 on Question Answering on Molweni
no code implementations • Findings (EMNLP) 2021 • Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
Pre-trained language models (PrLMs) have to carefully manage input units when training on a very large text with a vocabulary consisting of millions of words.
1 code implementation • EMNLP 2021 • Zhuosheng Zhang, Siru Ouyang, Hai Zhao, Masao Utiyama, Eiichiro Sumita
In this work, we propose an effective gating strategy by smoothing the two dialogue states in only one decoder and bridging decision making and question generation to provide a richer dialogue state reference.
no code implementations • 2 Aug 2021 • Xiangxiang Zhu, Bei Li, Kunde Yang, Zhuosheng Zhang, Wenting Li
The standard chirplet transform (CT) with a chirp-modulated Gaussian window provides a valuable tool for analyzing linear chirp signals.
no code implementations • 27 Jul 2021 • Zuchao Li, Kevin Parnow, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita
Though the pre-trained contextualized language model (PrLM) has made a significant impact on NLP, training PrLMs in languages other than English can be impractical for two reasons: other languages often lack corpora sufficient for training powerful PrLMs, and because of the commonalities among human languages, computationally expensive PrLM training for different languages is somewhat redundant.
no code implementations • 25 Jul 2021 • Bohong Wu, Zhuosheng Zhang, Hai Zhao
Multi-hop reading comprehension (MHRC) requires not only predicting the correct answer span in the given passage, but also providing a chain of supporting evidence for reasoning interpretability.
no code implementations • 4 Jun 2021 • Zhuosheng Zhang, Shucheng Yu
However, extending BO to the DBA setting is nontrivial because in DBA only output labels, rather than the real-valued scores that BO requires, are available to attackers.
no code implementations • ACL 2021 • Zhuosheng Zhang, Hai Zhao
Pre-trained language models (PrLMs) have demonstrated superior performance due to their strong ability to learn universal language representations from self-supervised pre-training.
1 code implementation • NeurIPS 2021 • Siru Ouyang, Zhuosheng Zhang, Hai Zhao
Therefore, we argue that the natural logic units are the backbone constituents of the sentence, such as subject-verb-object "facts", covering both the global and local knowledge pieces that are necessary as the basis for logical reasoning.
Ranked #24 on Reading Comprehension on ReClor
no code implementations • 4 Mar 2021 • Zhuosheng Zhang, Hai Zhao
In this paper, we review the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.
no code implementations • 11 Feb 2021 • Zuchao Li, Zhuosheng Zhang, Hai Zhao, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita
In this paper, we propose explicit and implicit text compression approaches to enhance the Transformer encoding and evaluate models using this approach on several typical downstream tasks that rely on the encoding heavily.
no code implementations • 10 Feb 2021 • Zhuosheng Zhang, Junlong Li, Hai Zhao
Experimental results on four dialogue comprehension benchmark tasks show that our proposed model achieves substantial improvements over the baselines.
no code implementations • 4 Feb 2021 • Zhuosheng Zhang, Jiarui Li, Shucheng Yu, Christian Makaya
For model privacy, local model parameters in federated learning should be obfuscated before being sent to the remote aggregator.
no code implementations • 1 Jan 2021 • Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
Instead of fixing the linguistic input unit too early, as nearly all previous work did, we propose a novel method that incorporates span-level information into the representations generated by PrLMs during the fine-tuning phase for better flexibility.
no code implementations • 1 Jan 2021 • Zuchao Li, Kevin Barry Parnow, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita
Though the pre-trained contextualized language model (PrLM) has made a significant impact on NLP, training PrLMs in languages other than English can be impractical for two reasons: other languages often lack corpora sufficient for training powerful PrLMs, and because of the commonalities among human languages, computationally expensive PrLM training for different languages is somewhat redundant.
no code implementations • 30 Dec 2020 • Rongzhou Bao, Jiayi Wang, Zhuosheng Zhang, Hai Zhao
By substituting complex words with simple alternatives, lexical simplification (LS) is a recognized method to reduce such lexical diversity, and therefore to improve the understandability of sentences.
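A toy sketch of frequency-based lexical simplification in the spirit described above, using a hand-made substitution table and word-frequency counts (both hypothetical):

```python
# Toy frequency-based lexical simplification: replace a complex word with the
# most frequent candidate from a small, hand-made substitution table.
word_freq = {"use": 900, "utilize": 40, "help": 800, "facilitate": 30}
substitutions = {"utilize": ["use"], "facilitate": ["help"]}

def simplify(sentence: str) -> str:
    out = []
    for word in sentence.split():
        candidates = substitutions.get(word.lower(), [])
        # Substitute only if a candidate is more frequent (i.e., simpler) than the original.
        better = [c for c in candidates if word_freq.get(c, 0) > word_freq.get(word.lower(), 0)]
        out.append(max(better, key=lambda c: word_freq[c]) if better else word)
    return " ".join(out)

print(simplify("We utilize this tool to facilitate reading"))
# -> "We use this tool to help reading"
```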
no code implementations • 30 Dec 2020 • Zhuosheng Zhang, Haojie Yu, Hai Zhao, Rui Wang, Masao Utiyama
Word representation is a fundamental component in neural language understanding models.
1 code implementation • Findings (ACL) 2021 • Siru Ouyang, Zhuosheng Zhang, Hai Zhao
Conversational Machine Reading (CMR) aims at answering questions in a complicated manner.
no code implementations • 27 Dec 2020 • Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, Rui Wang
In detail, for self-attention network (SAN) sponsored Transformer-based encoder, we introduce syntactic dependency of interest (SDOI) design into the SAN to form an SDOI-SAN with syntax-guided self-attention.
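One possible reading of syntax-guided self-attention is sketched below, under the assumption that each token may attend only to itself, its dependency head, and its direct dependents; the actual SDOI design may differ.

```python
import numpy as np

# Toy dependency heads for "The cat sat on the mat" (0-indexed; the root points to itself):
# The->cat, cat->sat, sat->sat, on->sat, the->mat, mat->on
heads = np.array([1, 2, 2, 2, 5, 3])

n = len(heads)
syntax_mask = np.eye(n, dtype=bool)
syntax_mask[np.arange(n), heads] = True   # each token may attend to itself and its head
syntax_mask |= syntax_mask.T              # ...and to its direct dependents

# Apply as a hard mask on attention logits before the softmax.
scores = np.random.randn(n, n)
scores = np.where(syntax_mask, scores, -1e9)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
print(weights.round(2))
```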
1 code implementation • 7 Dec 2020 • Yilin Zhao, Zhuosheng Zhang, Hai Zhao
Thus we propose a novel reference-based knowledge enhancement model called Reference Knowledgeable Network (RekNet), which simulates human reading strategies to refine critical information from the passage and quote explicit knowledge when necessary.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Junru Zhou, Zhuosheng Zhang, Hai Zhao, Shuailiang Zhang
Besides, LIMIT-BERT adopts a semi-supervised learning strategy to provide the same large amount of linguistic task data as is used for language model training.
1 code implementation • 26 Sep 2020 • Yi Xu, Hai Zhao, Zhuosheng Zhang
In retrieval-based multi-turn dialogue modeling, it remains a challenge to select the most appropriate response by extracting salient features from the context utterances.
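A minimal dual-encoder sketch of response selection: score each candidate response against an aggregated context embedding and pick the best match. The random vectors stand in for encoder outputs.

```python
import numpy as np

def select_response(context_vec: np.ndarray, candidate_vecs: np.ndarray) -> int:
    """Pick the candidate response whose embedding is most similar to the
    aggregated context embedding (cosine similarity)."""
    c = context_vec / np.linalg.norm(context_vec)
    r = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    return int(np.argmax(r @ c))

rng = np.random.default_rng(0)
context = rng.normal(size=64)              # e.g., mean-pooled utterance embeddings
candidates = rng.normal(size=(10, 64))     # encoded candidate responses
print(select_response(context, candidates))
```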
no code implementations • 15 Sep 2020 • Junjie Yang, Zhuosheng Zhang, Hai Zhao
Generative machine reading comprehension (MRC) requires a model to generate well-formed answers.
1 code implementation • 14 Sep 2020 • Longxiang Liu, Zhuosheng Zhang, Hai Zhao, Xi Zhou, Xiang Zhou
A multi-turn dialogue is composed of multiple utterances from two or more different speaker roles.
no code implementations • 14 Sep 2020 • Zhuosheng Zhang, Yiqing Zhang, Hai Zhao, Xi Zhou, Xiang Zhou
This paper presents a novel method to generate answers for non-extraction machine reading comprehension (MRC) tasks whose answers cannot be simply extracted as one span from the given passages.
1 code implementation • 10 Sep 2020 • Junlong Li, Zhuosheng Zhang, Hai Zhao
Pre-trained language models (PrLMs) have achieved great success on a wide range of natural language processing tasks by virtue of the universal language representation ability obtained by self-supervised learning on a large corpus.
1 code implementation • 13 May 2020 • Zhuosheng Zhang, Hai Zhao, Rui Wang
In this survey, we provide a comprehensive and comparative review of MRC covering overall research topics about 1) the origin and development of MRC and CLM, with a particular focus on the role of CLMs; 2) the impact of MRC and CLM on the NLP community; 3) the definition, datasets, and evaluation of MRC; 4) general MRC architecture and technical methods in the view of the two-stage Encoder-Decoder solving architecture from the insights of the cognitive process of humans; 5) previous highlights, emerging topics, and our empirical analysis, among which we especially focus on what works in different periods of MRC research.
1 code implementation • ICLR 2020 • Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, Hai Zhao
Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations.
no code implementations • ICLR 2020 • Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao
However, MLE focuses on once-to-all matching between the predicted sequence and gold-standard, consequently treating all incorrect predictions as being equally incorrect.
no code implementations • 29 Apr 2020 • Junlong Li, Zhuosheng Zhang, Hai Zhao
In this paper, the relevance of each turn to the question is calculated to choose key turns.
2 code implementations • 27 Jan 2020 • Zhuosheng Zhang, Junjie Yang, Hai Zhao
Inspired by how humans solve reading comprehension questions, we proposed a retrospective reader (Retro-Reader) that integrates two stages of reading and verification strategies: 1) sketchy reading that briefly investigates the overall interactions of passage and question, and yields an initial judgment; 2) intensive reading that verifies the answer and gives the final prediction.
Ranked #7 on Question Answering on SQuAD2.0
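A toy sketch of how the two stages could be combined at decision time, assuming a no-answer (null) score from sketchy reading and a best-span score from intensive reading; the thresholding rule here is illustrative, not the paper's exact verifier.

```python
def retro_reader_decision(null_score: float, span_score: float, span_text: str,
                          threshold: float = 0.0) -> str:
    """Toy combination of the two verification signals described in the abstract:
    a sketchy-reading answerability (null) score and an intensive-reading span score.
    If the no-answer evidence outweighs the best span, predict unanswerable."""
    if null_score - span_score > threshold:
        return ""            # unanswerable
    return span_text

print(retro_reader_decision(null_score=1.2, span_score=0.4, span_text="in 1969"))   # ""
print(retro_reader_decision(null_score=0.1, span_score=0.9, span_text="in 1969"))   # "in 1969"
```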
1 code implementation • 27 Dec 2019 • Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao
In this paper, we propose an explicit sentence compression method to enhance the source sentence representation for NMT.
no code implementations • 7 Nov 2019 • Zhuosheng Zhang, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Hai Zhao
We present a universal framework to model contextualized sentence representations with visual awareness that is motivated to overcome the shortcomings of the multimodal parallel data with manual annotations.
no code implementations • CONLL 2019 • Zuchao Li, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita
This paper describes our SJTU-NICT{'}s system for participating in the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference for Computational Language Learning (CoNLL).
no code implementations • 31 Oct 2019 • Junru Zhou, Zhuosheng Zhang, Hai Zhao, Shuailiang Zhang
In this paper, we present a Linguistic Informed Multi-Task BERT (LIMIT-BERT) for learning language representations across multiple linguistic tasks by Multi-Task Learning (MTL).
1 code implementation • 5 Sep 2019 • Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, Xiang Zhou
The latest work on language representations carefully integrates contextualized features into language model training, which enables a series of successes, especially in various machine reading comprehension and natural language inference tasks.
Ranked #6 on Natural Language Inference on SNLI
no code implementations • 3 Sep 2019 • Zhuosheng Zhang, Bingjie Tang, Zuchao Li, Hai Zhao
This work models the named entity distribution by visualizing the topological structure of the embedding space, under the assumption that most, if not all, named entities (NEs) of a language tend to aggregate together and can be accommodated by a specific hypersphere in the embedding space.
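A small sketch of the hypersphere assumption: fit a centroid and radius to known named-entity embeddings and test whether new vectors fall inside. The synthetic embeddings are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
ne_embeddings = rng.normal(loc=2.0, size=(200, 50))   # embeddings of known named entities
candidates = rng.normal(loc=0.0, size=(5, 50))        # words to test

# Fit a hypersphere: the centroid of NE embeddings and a radius covering, e.g., 95% of them.
center = ne_embeddings.mean(axis=0)
dists = np.linalg.norm(ne_embeddings - center, axis=1)
radius = np.quantile(dists, 0.95)

is_ne = np.linalg.norm(candidates - center, axis=1) <= radius
print(radius, is_ne)
```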
no code implementations • 3 Sep 2019 • Zhuosheng Zhang, Zhen Meng, Hai Zhao
This paper presents a smart sliding Chinese pinyin Input Method Editor (IME) for touchscreen devices which allows users to slide a finger from one key to another on the touchscreen instead of tapping keys one by one, while the target Chinese character sequence is predicted during the sliding process to help users input Chinese characters efficiently.
no code implementations • 31 Aug 2019 • Ying Luo, Hai Zhao, Zhuosheng Zhang, Bingjie Tang
For monolingual cases, the proposed named entity model gives an open description of diverse named entity types and different languages.
2 code implementations • 30 Aug 2019 • Shuailiang Zhang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, Xiang Zhou
Multi-choice reading comprehension is a challenging task of selecting an answer from a set of candidate options given a passage and a question.
1 code implementation • 14 Aug 2019 • Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, Rui Wang
In detail, for self-attention network (SAN) sponsored Transformer-based encoder, we introduce syntactic dependency of interest (SDOI) design into the SAN to form an SDOI-SAN with syntax-guided self-attention.
Ranked #5 on Question Answering on SQuAD2.0 dev
no code implementations • 22 Apr 2019 • Shu Jiang, Zhuosheng Zhang, Hai Zhao, Jiangtong Li, Yang Yang, Bao-liang Lu, Ning Xia
Chemical reaction practicality is the core task among all symbol intelligence based chemical information processing; for example, it provides an indispensable clue for further automatic synthesis route inference.
no code implementations • 27 Jan 2019 • Shuailiang Zhang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, Xiang Zhou
Multi-choice reading comprehension is a challenging task that requires a complex reasoning procedure.
Ranked #3 on Question Answering on RACE
1 code implementation • 16 Jan 2019 • Zuchao Li, Shexia He, Hai Zhao, Yiqing Zhang, Zhuosheng Zhang, Xi Zhou, Xiang Zhou
Semantic role labeling (SRL) aims to discover the predicate-argument structure of a sentence.
Ranked #9 on Semantic Role Labeling on CoNLL 2005
1 code implementation • ACL 2019 • Zhuosheng Zhang, Yafang Huang, Hai Zhao
Pinyin-to-character (P2C) conversion is the core component of pinyin-based Chinese input method engine (IME).
1 code implementation • 6 Nov 2018 • Zhuosheng Zhang, Hai Zhao, Kangwei Ling, Jiangtong Li, Zuchao Li, Shexia He, Guohong Fu
Representation learning is the foundation of machine reading comprehension and inference.
1 code implementation • EMNLP 2018 • Zuchao Li, Shexia He, Jiaxun Cai, Zhuosheng Zhang, Hai Zhao, Gongshen Liu, Linlin Li, Luo Si
Semantic role labeling (SRL) aims to recognize the predicate-argument structure of a sentence.
1 code implementation • CONLL 2018 • Zuchao Li, Shexia He, Zhuosheng Zhang, Hai Zhao
This paper describes the system of team LeisureX in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.
no code implementations • 8 Sep 2018 • Zhuosheng Zhang, Yuwei Wu, Zuchao Li, Hai Zhao
Who did what to whom is a major focus in natural language understanding, which is exactly the aim of the semantic role labeling (SRL) task.
Ranked #9 on Natural Language Inference on SNLI
no code implementations • 8 Sep 2018 • Zhuosheng Zhang, Shexia He, Zuchao Li, Hai Zhao
The goal of semantic role labeling (SRL) is to discover the predicate-argument structure of a sentence, which plays a critical role in deep processing of natural language.
no code implementations • COLING 2018 • Pengfei Zhu, Zhuosheng Zhang, Jiangtong Li, Yafang Huang, Hai Zhao
Traditional chatbots usually need a mass of human dialogue data, especially when using supervised machine learning methods.
no code implementations • 7 Aug 2018 • Zhuosheng Zhang, Yafang Huang, Pengfei Zhu, Hai Zhao
Machine reading comprehension is a task that models the relationship between a passage and a query.
no code implementations • ACL 2018 • Yafang Huang, Zuchao Li, Zhuosheng Zhang, Hai Zhao
Chinese pinyin input method engine (IME) lets users conveniently input Chinese into a computer by typing pinyin on a common keyboard.
1 code implementation • COLING 2018 • Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, Gongshen Liu
In this paper, we formulate previous utterances into context using a proposed deep utterance aggregation model to form a fine-grained context representation.
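A minimal sketch of attention-weighted utterance aggregation in the spirit of the description above: weight each previous turn by its relevance to the latest utterance and pool into one context vector. The encodings are random placeholders.

```python
import numpy as np

def aggregate_context(utterance_vecs: np.ndarray, query_vec: np.ndarray) -> np.ndarray:
    """Weight each utterance by its relevance to the latest utterance (the query)
    and aggregate into a single fine-grained context vector."""
    scores = utterance_vecs @ query_vec            # relevance of each turn
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ utterance_vecs                # weighted sum over turns

rng = np.random.default_rng(0)
utterances = rng.normal(size=(6, 32))   # encoded previous utterances
query = rng.normal(size=32)             # encoding of the latest utterance
print(aggregate_context(utterances, query).shape)  # (32,)
```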
Ranked #11 on Conversational Response Selection on E-commerce
1 code implementation • COLING 2018 • Zhuosheng Zhang, Yafang Huang, Hai Zhao
Representation learning is the foundation of machine reading comprehension.
1 code implementation • COLING 2018 • Zhuosheng Zhang, Hai Zhao
Answering questions from university admission exams (Gaokao in Chinese) is a challenging AI task since it requires effective representation to capture complicated semantic relations between questions and answers.
no code implementations • SEMEVAL 2018 • Zhuosheng Zhang, Jiangtong Li, Hai Zhao, Bingjie Tang
This paper describes a hypernym discovery system for our participation in the SemEval-2018 Task 9, which aims to discover the best (set of) candidate hypernyms for input concepts or entities, given the search space of a pre-defined vocabulary.
Ranked #5 on Hypernym Discovery on Music domain