no code implementations • EMNLP (ACL) 2021 • Hai Zhao, Rui Wang, Kehai Chen
This tutorial surveys the latest technical progress in syntactic parsing and the role of syntax in end-to-end natural language processing (NLP) tasks, taking semantic role labeling (SRL) and machine translation (MT) as representative NLP tasks that have long benefited from informative syntactic clues, even as advances in end-to-end deep learning models yield new results.
1 code implementation • COLING 2022 • Ziming Cheng, Zuchao Li, Hai Zhao
Abstract Meaning Representation (AMR) offers a unified semantic representation for natural language sentences.
Ranked #7 on AMR Parsing on LDC2020T02
1 code implementation • COLING 2022 • Yifei Yang, Zuchao Li, Hai Zhao
To address this mismatch, this work models all nested NEs in a sentence as a holistic structure, and we then propose a holistic structure parsing algorithm to disclose the entire NEs at once.
1 code implementation • COLING 2022 • Yifei Yang, Hai Zhao
Existing studies typically handle aspect-based sentiment analysis by stacking multiple neural modules, which inevitably results in severe error propagation.
Aspect-Based Sentiment Analysis (ABSA) • Machine Reading Comprehension
1 code implementation • COLING 2022 • Jiawei Wang, Hai Zhao
ArT is totally unsupervised and KBs-free.
no code implementations • COLING 2022 • Jialin Chen, Zhuosheng Zhang, Hai Zhao
Machine reading comprehension (MRC) poses new challenges to logical reasoning, which aims to understand the implicit logical relations entailed in the given contexts and perform inference over them.
no code implementations • ACL 2022 • Zuchao Li, Masao Utiyama, Eiichiro Sumita, Hai Zhao
Although this can satisfy the requirements overall, it usually requires a larger beam size and far longer decoding time than unrestricted translation, which limits the concurrent processing ability of the translation model in deployment, and thus its practicality.
no code implementations • Findings (ACL) 2022 • Zuchao Li, Yiran Wang, Masao Utiyama, Eiichiro Sumita, Hai Zhao, Taro Watanabe
Inspired by this discovery, we then propose approaches to improving it, with respect to model structure and model training, to make the deep decoder practical in NMT.
no code implementations • ACL (WAT) 2021 • Zuchao Li, Masao Utiyama, Eiichiro Sumita, Hai Zhao
This paper describes our system (Team ID: nictrb) for participating in the WAT’21 restricted machine translation task.
no code implementations • Findings (EMNLP) 2021 • Jiawei Wang, Hai Zhao, Yinggong Zhao, Libin Shen
Machine reading comprehension (MRC) is a challenging NLP task, as it requires carefully dealing with all linguistic granularities from word and sentence to passage.
no code implementations • WMT (EMNLP) 2021 • Zuchao Li, Masao Utiyama, Eiichiro Sumita, Hai Zhao
In this paper, we describe our MiSS system that participated in the WMT21 news translation task.
no code implementations • EMNLP 2021 • Zuchao Li, Masao Utiyama, Eiichiro Sumita, Hai Zhao
Machine translation usually relies on parallel corpora to provide parallel signals for training.
no code implementations • EMNLP (ACL) 2021 • Zuchao Li, Kevin Parnow, Masao Utiyama, Eiichiro Sumita, Hai Zhao
With this system, we aim to provide a complete translation experience for machine translation users.
no code implementations • WMT (EMNLP) 2020 • Zuchao Li, Hai Zhao, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita
In this paper, we introduce our joint team SJTU-NICT's participation in the WMT 2020 machine translation shared task.
3 code implementations • 2 Feb 2023 • Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola
Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer.
Ranked #1 on Science Question Answering on ScienceQA
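As a rough illustration of the two-stage chain-of-thought idea in the entry above (generate a rationale first, then infer the answer conditioned on it), here is a minimal text-only sketch. The model choice (google/flan-t5-base), the prompt wording, and the omission of the paper's multimodal inputs are all illustrative assumptions, not the authors' setup.

```python
# Two-stage chain-of-thought prompting sketch: rationale generation, then answer inference.
from transformers import pipeline

generator = pipeline("text2text-generation", model="google/flan-t5-base")

question = "If a train travels 60 miles in 1.5 hours, what is its average speed?"

# Stage 1: ask the model for an intermediate reasoning chain (the rationale).
rationale = generator(
    f"Question: {question}\nExplain the reasoning step by step.",
    max_new_tokens=64,
)[0]["generated_text"]

# Stage 2: condition the final answer on the generated rationale.
answer = generator(
    f"Question: {question}\nReasoning: {rationale}\nTherefore, the answer is:",
    max_new_tokens=16,
)[0]["generated_text"]

print("Rationale:", rationale)
print("Answer:", answer)
```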
no code implementations • 10 Jan 2023 • Zhuosheng Zhang, Hai Zhao, Longxiang Liu
We decouple the contextualized word representations by masking mechanisms in Transformer-based PrLMs, making each word focus only on the words in the current utterance, other utterances, and the two speaker roles (i.e., utterances of the sender and utterances of the receiver), respectively.
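A minimal sketch of how such role- and utterance-based masking could be realized, assuming token-level utterance ids and speaker labels are available; the four channels below are illustrative and may not match the paper's exact definitions.

```python
import numpy as np

# toy dialogue: one utterance id and one speaker id per token
utt_id  = np.array([0, 0, 0, 1, 1, 2, 2, 2])   # which utterance each token belongs to
speaker = np.array([0, 0, 0, 1, 1, 0, 0, 0])    # 0 = sender, 1 = receiver

same_utt = utt_id[:, None] == utt_id[None, :]

masks = {
    "current_utterance": same_utt,                                    # attend within own utterance
    "other_utterances":  ~same_utt,                                   # attend only across utterances
    "sender_utterances":   np.broadcast_to(speaker[None, :] == 0, same_utt.shape),
    "receiver_utterances": np.broadcast_to(speaker[None, :] == 1, same_utt.shape),
}

# each boolean mask would be added to the attention logits as 0 / -inf in a Transformer layer
for name, m in masks.items():
    print(name)
    print(m.astype(int))
```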
no code implementations • 9 Jan 2023 • Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, Hai Zhao
Representation learning is the foundation of natural language processing (NLP).
no code implementations • 16 Dec 2022 • Junlong Li, Zhuosheng Zhang, Hai Zhao
Open-Domain Question Answering (ODQA) requires models to answer factoid questions with no context given.
no code implementations • 1 Dec 2022 • Zhuosheng Zhang, Hai Zhao, Masao Utiyama, Eiichiro Sumita
Discriminative pre-trained language models (PLMs) learn to predict original texts from intentionally corrupted ones.
2 code implementations • 19 Oct 2022 • Hongqiu Wu, Ruixue Ding, Hai Zhao, Boli Chen, Pengjun Xie, Fei Huang, Min Zhang
Multiple pre-training objectives fill the gap in understanding capability left by single-objective language modeling, serving the ultimate purpose of pre-trained language models (PrLMs): generalizing well across a wide range of scenarios.
1 code implementation • 16 Oct 2022 • Bohong Wu, Hai Zhao
Though offering amazing contextualized token-level representations, current pre-trained language models pay less attention to accurately acquiring sentence-level representations during their self-supervised pre-training.
no code implementations • 13 Oct 2022 • Sizhe Zhou, Siru Ouyang, Zhuosheng Zhang, Hai Zhao
In the open-retrieval conversational machine reading (OR-CMR) task, machines are required to perform multi-turn question answering given a dialogue history and a textual knowledge base.
1 code implementation • 12 Oct 2022 • Zhuosheng Zhang, Shuohang Wang, Yichong Xu, Yuwei Fang, Wenhao Yu, Yang Liu, Hai Zhao, Chenguang Zhu, Michael Zeng
Leveraging task-aware annotated data as supervised signals to assist with self-supervised learning on large-scale unlabeled data has become a new trend in pre-training language models.
1 code implementation • 11 Oct 2022 • Zhuosheng Zhang, Hai Zhao, Ming Zhou
They treat training instances equally throughout the training process, with little attention to the individual contribution of those instances.
1 code implementation • COLING 2022 • Yiyang Li, Hongqiu Wu, Hai Zhao
Based on the tremendous success of pre-trained language models (PrLMs) for source code comprehension tasks, current literature studies either ways to further improve the performance (generalization) of PrLMs, or their robustness against adversarial attacks.
no code implementations • 23 Aug 2022 • Dongjie Yang, Zhuosheng Zhang, Hai Zhao
Masked Language Modeling (MLM) has been widely used as the denoising objective in pre-training language models (PrLMs).
no code implementations • 23 Aug 2022 • Letian Peng, Zuchao Li, Hai Zhao
In detail, it works on PLMs through the Replaced Token Detection (RTD) pre-training objective in ELECTRA, whose corruption detection objective reflects confidence in contextual integrity, which is more relevant to commonsense reasoning than existing probability measures.
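To illustrate the general idea of using an RTD discriminator's corruption-detection confidence as a contextual-integrity score (not the paper's exact procedure), one could query a pre-trained ELECTRA discriminator as follows; the checkpoint and the mean-over-tokens scoring are assumptions.

```python
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

tokenizer = AutoTokenizer.from_pretrained("google/electra-small-discriminator")
model = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
model.eval()

def integrity_score(sentence: str) -> float:
    """Mean probability that the discriminator judges each token as 'original' (not replaced)."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]        # one logit per token; higher = more likely 'replaced'
    p_original = 1.0 - torch.sigmoid(logits)
    return p_original.mean().item()

# compare how well each candidate fits the context
template = "He was thirsty, so he drank a glass of {}."
for cand in ["water", "sand"]:
    print(cand, round(integrity_score(template.format(cand)), 4))
```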
no code implementations • 21 Jul 2022 • Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
However, we find that most existing textual adversarial examples are unnatural and can be easily distinguished by both humans and machines.
1 code implementation • 25 Jun 2022 • Hongqiu Wu, Ruixue Ding, Hai Zhao, Pengjun Xie, Fei Huang, Min Zhang
Deep neural models (e.g., Transformer) naturally learn spurious features, which create a "shortcut" between the labels and inputs, thus impairing generalization and robustness.
Ranked #1 on Machine Reading Comprehension on DREAM
Machine Reading Comprehension • Named Entity Recognition (NER)
1 code implementation • 30 Apr 2022 • Letian Peng, Zuchao Li, Hai Zhao
We report the performance of DeBERTaV3 on CommonsenseQA in this report.
no code implementations • 20 Apr 2022 • Bohong Wu, Hai Zhao
If self-supervised learning can be divided into two subcategories, generative and contrastive, then most existing studies show that sentence representation learning benefits more from contrastive methods than from generative methods.
1 code implementation • 18 Apr 2022 • Yiyang Li, Hai Zhao, Zhuosheng Zhang
Multi-turn dialogue modeling, as a challenging branch of natural language understanding (NLU), aims to build representations for machines to understand human dialogues, providing a solid foundation for multiple downstream tasks.
no code implementations • 17 Apr 2022 • Yifei Yang, Zuchao Li, Hai Zhao
To address this mismatch, this work models all nested NEs in a sentence as a holistic structure, and we then propose a holistic structure parsing algorithm to disclose the entire NEs at once.
1 code implementation • ACL 2022 • Yilin Zhao, Hai Zhao, Libin Shen, Yinggong Zhao
As a broad and major category in machine reading comprehension (MRC), the generalized goal of discriminative MRC is answer prediction from the given materials.
1 code implementation • Findings (ACL) 2022 • Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
We question the validity of current evaluation of robustness of PrLMs based on these non-natural adversarial samples and propose an anomaly detector to evaluate the robustness of PrLMs with more natural adversarial samples.
no code implementations • 4 Jan 2022 • Jiajia Li, Letian Peng, Ping Wang, Zuchao Li, Xueyi Li, Hai Zhao
As the model training on information from users is likely to invade personal privacy, many methods have been proposed to block the learning and memorizing of the sensitive data in raw texts.
1 code implementation • 26 Dec 2021 • Jiawei Wang, Hai Zhao
In detail, our model first focuses on key parts in the given context, and then generates highly related knowledge on such a basis in an association way like human thinking.
no code implementations • NeurIPS 2021 • Kailai Sun, Zuchao Li, Hai Zhao
Pre-trained language models (PrLMs) dominate downstream natural language processing tasks, and multilingual PrLMs take advantage of language universality to alleviate the issue of limited resources for low-resource languages.
1 code implementation • EMNLP 2021 • Hongjiang Jing, Zuchao Li, Hai Zhao, Shu Jiang
Therefore, we propose a joint ABSA model, which not only enjoys the benefits of encoder sharing but also focuses on the difference to improve the effectiveness of the model.
no code implementations • 29 Oct 2021 • Letian Peng, Zuchao Li, Hai Zhao
Unsupervised constituency parsing has been explored much but is still far from being solved.
1 code implementation • ACL 2022 • Baorong Huang, Zhuosheng Zhang, Hai Zhao
In this paper, we imitate the human reading process of connecting anaphoric expressions and explicitly leverage the coreference information of entities to enhance the word embeddings from the pre-trained language model, in order to highlight the coreference mentions that must be identified for coreference-intensive question answering in QUOREF, a relatively new dataset specifically designed to evaluate a model's coreference-related performance.
1 code implementation • ACL 2022 • Xinbei Ma, Zhuosheng Zhang, Hai Zhao
Tangled multi-party dialogue contexts lead to challenges for dialogue reading comprehension, where multiple dialogue threads flow simultaneously within a common dialogue record, increasing difficulties in understanding the dialogue history for both human and machine.
no code implementations • ACL 2022 • Bohong Wu, Zhuosheng Zhang, JinYuan Wang, Hai Zhao
In detail, we introduce an in-passage negative sampling strategy to encourage a diverse generation of sentence representations within the same passage.
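A minimal sketch of a contrastive objective in which the negatives for each sentence come from the same passage; the dropout-style positives, temperature, and InfoNCE form are common choices assumed here, not necessarily the paper's exact configuration.

```python
import torch
import torch.nn.functional as F

def in_passage_contrastive_loss(sent_emb: torch.Tensor, pos_emb: torch.Tensor, temperature: float = 0.05):
    """
    sent_emb: [n, d] embeddings of the n sentences of one passage (anchors)
    pos_emb:  [n, d] embeddings of a positive view of each sentence (e.g. a dropout-augmented encoding)
    Negatives for each anchor are the positives of the *other* sentences in the same passage.
    """
    a = F.normalize(sent_emb, dim=-1)
    b = F.normalize(pos_emb, dim=-1)
    logits = a @ b.t() / temperature          # [n, n]; diagonal = positives, off-diagonal = in-passage negatives
    targets = torch.arange(a.size(0))
    return F.cross_entropy(logits, targets)

# toy usage with random vectors standing in for encoder outputs
torch.manual_seed(0)
anchors = torch.randn(4, 8)
positives = anchors + 0.1 * torch.randn(4, 8)
print(in_passage_contrastive_loss(anchors, positives).item())
```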
no code implementations • 11 Oct 2021 • Zhuosheng Zhang, Hai Zhao
In this paper, we review the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.
1 code implementation • PACLIC 2021 • Yuchen He, Zhuosheng Zhang, Hai Zhao
Multi-party dialogue machine reading comprehension (MRC) raises an even more challenging understanding goal on dialogue with more than two involved speakers, compared with the traditional plain passage style MRC.
Ranked #1 on Discourse Parsing on Molweni
1 code implementation • 4 Oct 2021 • Letian Peng, Zuchao Li, Hai Zhao
By exploiting the property of NDD, we implement an unsupervised and even training-free algorithm for extractive sentence compression.
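The snippet does not expand NDD; assuming it denotes a neighboring-distribution-divergence style measure, a training-free compression scorer might compare a masked LM's prediction for the token next to a candidate span before and after deleting that span, as sketched below. The model choice, span lengths, and KL formulation are all assumptions.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def neighbor_log_dist(words, neighbor_idx):
    """Masked-LM log-distribution at word position `neighbor_idx`, with that word masked."""
    masked = list(words)
    masked[neighbor_idx] = tokenizer.mask_token
    enc = tokenizer(" ".join(masked), return_tensors="pt")
    mask_pos = (enc.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        logits = model(**enc).logits[0, mask_pos]
    return torch.log_softmax(logits, dim=-1)

def deletion_divergence(words, start, end):
    """KL divergence of the distribution at the word right after the span, before vs. after deleting words[start:end]."""
    p = neighbor_log_dist(words, end)                      # neighbor in the full sentence
    compressed = words[:start] + words[end:]
    q = neighbor_log_dist(compressed, start)               # same neighbor after the deletion
    return torch.sum(p.exp() * (p - q)).item()             # KL(p || q)

sentence = "the quick brown fox easily jumped over the extremely lazy dog".split()
# score every length-1 and length-2 span; low divergence suggests the span is safe to delete
spans = [(i, j) for i in range(len(sentence)) for j in (i + 1, i + 2) if j < len(sentence)]
scores = sorted((deletion_divergence(sentence, i, j), i, j) for i, j in spans)
for s, i, j in scores[:5]:
    print(f"{s:.3f}  delete {' '.join(sentence[i:j])!r}")
```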
no code implementations • 29 Sep 2021 • Siru Ouyang, Zhuosheng Zhang, Hai Zhao
Pre-trained language models (PrLMs) have been shown useful for enhancing a broad range of natural language understanding (NLU) tasks.
no code implementations • 14 Sep 2021 • Letian Peng, Zuchao Li, Hai Zhao
Attention scorers have achieved success in parsing tasks like semantic and syntactic dependency parsing.
no code implementations • 9 Sep 2021 • Xinbei Ma, Zhuosheng Zhang, Hai Zhao
Multi-party multi-turn dialogue comprehension brings unprecedented challenges in handling the complicated scenarios arising from multiple speakers and the criss-crossed discourse relationships among speaker-aware utterances.
Ranked #1 on Question Answering on Molweni
1 code implementation • Findings (EMNLP) 2021 • Yiyang Li, Hai Zhao
Multi-party dialogue machine reading comprehension (MRC) brings tremendous challenges since it involves multiple speakers in one dialogue, resulting in intricate speaker information flows and noisy dialogue contexts.
Ranked #2 on Question Answering on FriendsQA
no code implementations • 31 Aug 2021 • Pengfei Zhu, Xiaoguang Li, Jian Li, Hai Zhao
Open-domain Question Answering (ODQA) has achieved significant results under the supervised learning paradigm.
Machine Reading Comprehension • Open-Domain Question Answering
no code implementations • Findings (EMNLP) 2021 • Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
Pre-trained language models (PrLMs) have to carefully manage input units when training on a very large text with a vocabulary consisting of millions of words.
1 code implementation • EMNLP 2021 • Zhuosheng Zhang, Siru Ouyang, Hai Zhao, Masao Utiyama, Eiichiro Sumita
In this work, we propose an effective gating strategy by smoothing the two dialogue states in only one decoder and bridge decision making and question generation to provide a richer dialogue state reference.
no code implementations • 27 Jul 2021 • Zuchao Li, Kevin Parnow, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita
Though the pre-trained contextualized language model (PrLM) has made a significant impact on NLP, training PrLMs in languages other than English can be impractical for two reasons: other languages often lack corpora sufficient for training powerful PrLMs, and because of the commonalities among human languages, computationally expensive PrLM training for different languages is somewhat redundant.
no code implementations • 25 Jul 2021 • Bohong Wu, Zhuosheng Zhang, Hai Zhao
Multi-hop reading comprehension (MHRC) requires not only predicting the correct answer span in the given passage, but also providing a chain of supporting evidence for reasoning interpretability.
1 code implementation • Findings (ACL) 2021 • Yi Xu, Hai Zhao
Pre-trained language models (PrLMs) have been shown to be powerful in enhancing a broad range of downstream tasks, including various dialogue-related ones.
no code implementations • ACL 2021 • Yian Li, Hai Zhao
Despite well-developed cutting-edge representation learning for language, most language representation models usually focus on specific levels of linguistic units.
1 code implementation • 30 May 2021 • Rongzhou Bao, Jiayi Wang, Hai Zhao
In detail, we design an auxiliary anomaly detection classifier and adopt a multi-task learning procedure, by which PrLMs are able to distinguish adversarial input samples.
no code implementations • Findings (ACL) 2021 • Kevin Parnow, Zuchao Li, Hai Zhao
In Grammatical Error Correction (GEC), sequence labeling models enjoy fast inference compared to sequence-to-sequence models; however, inference in sequence labeling GEC models is an iterative process, as sentences are passed to the model for multiple rounds of correction, which exposes the model to sentences with progressively fewer errors at each round.
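A toy, rule-based stand-in for the iterative inference loop described above: a labeler tags each token with an edit, the edits are applied, and the partially corrected sentence is passed back for another round until no edits fire or a round limit is hit; the rules here are hypothetical placeholders for a trained sequence-labeling GEC model.

```python
# Toy illustration of iterative sequence-labeling correction (not the paper's model).
REPLACE = {"goed": "went", "childs": "children"}   # toy replacement rules

def label(tokens):
    """Assign one edit tag per token (toy rules standing in for a trained tagger)."""
    tags = []
    for i, tok in enumerate(tokens):
        if tok in REPLACE:
            tags.append(("REPLACE", REPLACE[tok]))
        elif i > 0 and tok == tokens[i - 1]:
            tags.append(("DELETE", None))          # drop adjacent duplicated tokens
        else:
            tags.append(("KEEP", None))
    return tags

def apply_edits(tokens, tags):
    out = []
    for tok, (op, arg) in zip(tokens, tags):
        if op == "KEEP":
            out.append(tok)
        elif op == "REPLACE":
            out.append(arg)
        # DELETE: skip the token entirely
    return out

sentence = "she goed to to the the market".split()
for rnd in range(1, 5):                            # iterative refinement loop
    corrected = apply_edits(sentence, label(sentence))
    print(f"round {rnd}: {' '.join(corrected)}")
    if corrected == sentence:                      # no edits fired: stop early
        break
    sentence = corrected
```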
no code implementations • ACL 2021 • Zhuosheng Zhang, Hai Zhao
Pre-trained language models (PrLMs) have demonstrated superior performance due to their strong ability to learn universal language representations from self-supervised pre-training.
1 code implementation • NeurIPS 2021 • Siru Ouyang, Zhuosheng Zhang, Hai Zhao
Therefore, we argue that the natural logic units would be the group of backbone constituents of the sentence such as the subject-verb-object formed "facts", covering both global and local knowledge pieces that are necessary as the basis for logical reasoning.
Ranked #24 on Reading Comprehension on ReClor
no code implementations • 20 May 2021 • Zuchao Li, Junru Zhou, Hai Zhao, Kevin Parnow
Constituent and dependency parsing, the two classic forms of syntactic parsing, have been found to benefit from joint training and decoding under a uniform formalism, Head-driven Phrase Structure Grammar (HPSG).
no code implementations • 19 Apr 2021 • Kashif Munir, Hai Zhao, Zuchao Li
To decompose the task as two argument related subtasks, identification and clustering, we propose a pipeline that correspondingly consists of two neural modules.
no code implementations • NeurIPS 2021 • Hongqiu Wu, Hai Zhao, Min Zhang
Beyond the success story of pre-trained language models (PrLMs) in recent natural language processing, they are susceptible to over-fitting due to their unusually large model size.
no code implementations • EACL 2021 • Rui Wang, Hai Zhao
Unsupervised cross-lingual language representation initialization methods, together with mechanisms such as denoising and back-translation, have advanced unsupervised neural machine translation (UNMT), which has achieved impressive results.
no code implementations • 4 Mar 2021 • Zhuosheng Zhang, Hai Zhao
In this paper, we review the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.
no code implementations • 11 Feb 2021 • Zuchao Li, Zhuosheng Zhang, Hai Zhao, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita
In this paper, we propose explicit and implicit text compression approaches to enhance the Transformer encoding and evaluate models using this approach on several typical downstream tasks that rely on the encoding heavily.
no code implementations • 10 Feb 2021 • Zhuosheng Zhang, Junlong Li, Hai Zhao
Experimental results on four dialogue comprehension benchmark tasks show that our proposed model achieves great improvements over baselines.
no code implementations • 16 Jan 2021 • Sufeng Duan, Hai Zhao
We also propose a revisited multigraph called Multi-order-Graph (MoG), based on our explanation, to model the graph structures in the SAN-based model as subgraphs in MoG and convert the encoding of the SAN-based model into the generation of MoG.
no code implementations • 1 Jan 2021 • Zuchao Li, Kevin Barry Parnow, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita
Though the pre-trained contextualized language model (PrLM) has made a significant impact on NLP, training PrLMs in languages other than English can be impractical for two reasons: other languages often lack corpora sufficient for training powerful PrLMs, and because of the commonalities among human languages, computationally expensive PrLM training for different languages is somewhat redundant.
no code implementations • 1 Jan 2021 • Fengshun Xiao, Zuchao Li, Hai Zhao
In neural machine translation (NMT), data augmentation methods such as back-translation make it possible to use extra monolingual data to help improve translation performance, though they need extra training data, and in-domain monolingual data is not always available.
no code implementations • 1 Jan 2021 • Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
Instead of fixing the linguistic unit of the input too early, as nearly all previous work did, we propose a novel method that incorporates span-level information into the representations generated by PrLMs during the fine-tuning phase for better flexibility.
no code implementations • 1 Jan 2021 • Jeonghyeok Park, Hai Zhao
In this paper, we propose a novel method that infuses prior word alignment information into neural machine translation (NMT) to provide hints or guidelines for the target sentence at running time.
no code implementations • 30 Dec 2020 • Rongzhou Bao, Jiayi Wang, Zhuosheng Zhang, Hai Zhao
By substituting complex words with simple alternatives, lexical simplification (LS) is a recognized method to reduce such lexical diversity, and therefore to improve the understandability of sentences.
no code implementations • 30 Dec 2020 • Zhuosheng Zhang, Haojie Yu, Hai Zhao, Rui Wang, Masao Utiyama
Word representation is a fundamental component in neural language understanding models.
1 code implementation • Findings (ACL) 2021 • Hongqiu Wu, Hai Zhao, Min Zhang
Code summarization (CS) is becoming a promising area of language understanding; it aims to automatically generate sensible natural language descriptions for programming language in the form of source code, serving the convenience of programmers during development.
1 code implementation • Findings (ACL) 2021 • Siru Ouyang, Zhuosheng Zhang, Hai Zhao
Conversational Machine Reading (CMR) aims at answering questions in a complicated manner.
no code implementations • 28 Dec 2020 • Yian Li, Hai Zhao
We present a universal representation model, BURT (BERT-inspired Universal Representation from learning meaningful segmenT), to encode different levels of linguistic unit into the same vector space.
no code implementations • 27 Dec 2020 • Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, Rui Wang
In detail, for self-attention network (SAN) sponsored Transformer-based encoder, we introduce syntactic dependency of interest (SDOI) design into the SAN to form an SDOI-SAN with syntax-guided self-attention.
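One plausible way to turn a dependency tree into a syntax-guided attention mask, assuming (as an illustration, not necessarily the paper's exact SDOI definition) that each token attends only to itself and its ancestors:

```python
import numpy as np

# toy dependency tree: heads[i] is the index of token i's head; -1 marks the root
tokens = ["The", "cat", "sat", "on", "the", "mat"]
heads  = [1, 2, -1, 2, 5, 3]

n = len(tokens)
mask = np.zeros((n, n), dtype=bool)
for i in range(n):
    j = i
    while j != -1:            # walk up to the root, collecting the token itself and its ancestors
        mask[i, j] = True
        j = heads[j]

# rows are queries, columns are allowed keys; applied as 0 / -inf on the attention logits
for tok, row in zip(tokens, mask.astype(int)):
    print(f"{tok:>4}", row)
```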
no code implementations • 27 Dec 2020 • Kashif Munir, Hai Zhao, Zuchao Li
Semantic role labeling (SRL) aims at elaborating the meaning of a sentence by forming a predicate-argument structure.
no code implementations • 24 Dec 2020 • Kailai Sun, Zuchao Li, Hai Zhao
As it is unlikely to obtain a treebank for every human language, in this work, we propose an effective cross-lingual UD parsing framework for transferring parser from only one source monolingual treebank to any other target languages without treebank available.
1 code implementation • 7 Dec 2020 • Yilin Zhao, Zhuosheng Zhang, Hai Zhao
Thus we propose a novel reference-based knowledge enhancement model called Reference Knowledgeable Network (RekNet), which simulates human reading strategies to refine critical information from the passage and quote explicit knowledge in necessity.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Junru Zhou, Zhuosheng Zhang, Hai Zhao, Shuailiang Zhang
Besides, LIMIT-BERT takes a semi-supervised learning strategy to offer the same large amount of linguistics task data as that for the language model training.
no code implementations • 11 Oct 2020 • Zuchao Li, Hai Zhao, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita
In this paper, we introduce our joint team SJTU-NICT's participation in the WMT 2020 machine translation shared task.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Zuchao Li, Hai Zhao, Rui Wang, Kevin Parnow
Semantic role labeling is primarily used to identify predicates, arguments, and their semantic relationships.
1 code implementation • 26 Sep 2020 • Yi Xu, Hai Zhao, Zhuosheng Zhang
In retrieval-based multi-turn dialogue modeling, it remains a challenge to select the most appropriate response by extracting salient features from context utterances.
no code implementations • 16 Sep 2020 • Shu Jiang, Hai Zhao, Zuchao Li, Bao-liang Lu
Standard neural machine translation (NMT) is built on the assumption of document-level context independence.
no code implementations • 16 Sep 2020 • Sufeng Duan, Hai Zhao, Rui Wang
In light of the fact that current NMT models more or less capture graph information within the sequence in a latent way, we present a graph-to-sequence model that facilitates explicit capture of graph information.
no code implementations • 15 Sep 2020 • Junjie Yang, Zhuosheng Zhang, Hai Zhao
Generative machine reading comprehension (MRC) requires a model to generate well-formed answers.
1 code implementation • 14 Sep 2020 • Longxiang Liu, Zhuosheng Zhang, Hai Zhao, Xi Zhou, Xiang Zhou
A multi-turn dialogue is composed of multiple utterances from two or more different speaker roles.
no code implementations • 14 Sep 2020 • Zhuosheng Zhang, Yiqing Zhang, Hai Zhao, Xi Zhou, Xiang Zhou
This paper presents a novel method to generate answers for non-extraction machine reading comprehension (MRC) tasks whose answers cannot be simply extracted as one span from the given passages.
no code implementations • CL (ACL) 2021 • Zuchao Li, Hai Zhao, Shexia He, Jiaxun Cai
Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence.
no code implementations • 10 Sep 2020 • Yian Li, Hai Zhao
Despite well-developed cutting-edge representation learning for language, most language representation models usually focus on a specific level of linguistic unit, which causes great inconvenience when handling multiple layers of linguistic objects in a unified way.
1 code implementation • 10 Sep 2020 • Junlong Li, Zhuosheng Zhang, Hai Zhao
Pre-trained language models (PrLMs) have achieved great success on a wide range of natural language processing tasks by virtue of the universal language representation ability obtained by self-supervised learning on a large corpus.
1 code implementation • 13 May 2020 • Zhuosheng Zhang, Hai Zhao, Rui Wang
In this survey, we provide a comprehensive and comparative review of MRC covering overall research topics: 1) the origin and development of MRC and CLM, with a particular focus on the role of CLMs; 2) the impact of MRC and CLM on the NLP community; 3) the definition, datasets, and evaluation of MRC; 4) general MRC architecture and technical methods in the view of a two-stage Encoder-Decoder solving architecture from the insights of the cognitive process of humans; 5) previous highlights, emerging topics, and our empirical analysis, among which we especially focus on what works in different periods of MRC research.
1 code implementation • ACL 2020 • Ying Luo, Hai Zhao
In this paper, we propose a novel bipartite flat-graph network (BiFlaG) for nested named entity recognition (NER), which contains two subgraph modules: a flat NER module for outermost entities and a graph module for all the entities located in inner layers.
Ranked #6 on Nested Mention Recognition on ACE 2005
1 code implementation • ICLR 2020 • Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, Hai Zhao
Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations.
no code implementations • ICLR 2020 • Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao
However, MLE focuses on once-to-all matching between the predicted sequence and gold-standard, consequently treating all incorrect predictions as being equally incorrect.
no code implementations • 30 Apr 2020 • Sufeng Duan, Juncheng Cao, Hai Zhao
In this paper, we thus propose the capsule-Transformer, which extends the linear transformation into a more general capsule routing algorithm by taking SAN as a special case of capsule network.
no code implementations • 29 Apr 2020 • Sufeng Duan, Hai Zhao, Dong-dong Zhang, Rui Wang
Data augmentation is an effective performance enhancement in neural machine translation (NMT) by generating additional bilingual data.
no code implementations • 29 Apr 2020 • Junlong Li, Zhuosheng Zhang, Hai Zhao
In this paper, the relevance of each turn to the question is calculated to choose key turns.
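A minimal sketch of turn-to-question relevance scoring using TF-IDF cosine similarity and top-k selection; the actual paper likely uses learned representations, so the vectorizer and k below are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

turns = [
    "A: Hi, did you finish the report?",
    "B: Not yet, I was stuck on the budget section.",
    "A: The deadline is Friday, remember.",
    "B: I'll send you the draft tomorrow morning.",
]
question = "When will the draft be sent?"

vec = TfidfVectorizer().fit(turns + [question])
scores = cosine_similarity(vec.transform(turns), vec.transform([question])).ravel()

k = 2                                              # keep the k turns most relevant to the question
key_turns = sorted(range(len(turns)), key=lambda i: scores[i], reverse=True)[:k]
for i in sorted(key_turns):
    print(f"{scores[i]:.3f}  {turns[i]}")
```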
no code implementations • 29 Apr 2020 • Yian Li, Hai Zhao
Pre-trained contextualized language models such as BERT have shown great effectiveness in a wide range of downstream Natural Language Processing (NLP) tasks.
no code implementations • 28 Apr 2020 • Shuailiang Zhang, Hai Zhao, Junru Zhou
Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues through an attention mechanism.
no code implementations • NAACL 2021 • Mingxuan Wang, Hongxiao Bai, Hai Zhao, Lei LI
Neural machine translation (NMT) is ineffective for zero-resource languages.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Zuchao Li, Hai Zhao, Rui Wang, Masao Utiyama, Eiichiro Sumita
Further enriching the idea of pivot translation by extending the use of parallel corpora beyond the source-target paradigm, we propose a new reference language-based framework for UNMT, RUNMT, in which the reference language only shares a parallel corpus with the source, but this corpus still indicates a signal clear enough to help the reconstruction training of UNMT through a proposed reference agreement mechanism.
2 code implementations • 27 Jan 2020 • Zhuosheng Zhang, Junjie Yang, Hai Zhao
Inspired by how humans solve reading comprehension questions, we proposed a retrospective reader (Retro-Reader) that integrates two stages of reading and verification strategies: 1) sketchy reading that briefly investigates the overall interactions of passage and question, and yields an initial judgment; 2) intensive reading that verifies the answer and gives the final prediction.
Ranked #7 on Question Answering on SQuAD2.0
3 code implementations • 26 Jan 2020 • Pengfei Zhu, Hai Zhao, Xiaoguang Li
Multi-choice Machine Reading Comprehension (MRC) requires a model to decide the correct answer from a set of answer options when given a passage and a question.
Ranked #3 on Reading Comprehension on RACE
no code implementations • 1 Jan 2020 • Pengfei Zhu, Hai Zhao, Xiaoguang Li
Multi-choice Machine Reading Comprehension (MRC) requires a model to decide the correct answer from a set of answer options when given a passage and a question.
1 code implementation • 27 Dec 2019 • Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao
In this paper, we propose an explicit sentence compression method to enhance the source sentence representation for NMT.
2 code implementations • 25 Nov 2019 • Jeonghyeok Park, Hai Zhao
Korean-Chinese is a low resource language pair, but Korean and Chinese have a lot in common in terms of vocabulary.
1 code implementation • 20 Nov 2019 • Zuchao Li, Hai Zhao, Kevin Parnow
Most syntactic dependency parsing models may fall into one of two categories: transition- and graph-based models.
no code implementations • 7 Nov 2019 • Zuchao Li, Hai Zhao, Junru Zhou, Kevin Parnow, Shexia He
In this paper, we define a new cross-style semantic role label convention and propose a new cross-style joint optimization model designed around the most basic linguistic meaning of a semantic role, providing a solution to make the results of the two styles more comparable and allowing both formalisms of SRL to benefit from their natural connections in both linguistics and computation.
no code implementations • 7 Nov 2019 • Zhuosheng Zhang, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Hai Zhao
We present a universal framework to model contextualized sentence representations with visual awareness that is motivated to overcome the shortcomings of the multimodal parallel data with manual annotations.
1 code implementation • 6 Nov 2019 • Ying Luo, Fengshun Xiao, Hai Zhao
In this paper, we address these two deficiencies and propose a model augmented with hierarchical contextualized representation: sentence-level representation and document-level representation.
Ranked #12 on Named Entity Recognition (NER) on Ontonotes v5 (English) (using extra training data)
no code implementations • 5 Nov 2019 • Junjie Yang, Hai Zhao
Transformer-based pre-trained language models have proven to be effective for learning contextualized language representation.
no code implementations • CONLL 2019 • Zuchao Li, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita
This paper describes our SJTU-NICT's system for participating in the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference for Computational Language Learning (CoNLL).
no code implementations • CONLL 2019 • Hongxiao Bai, Hai Zhao
This paper describes the system of our team SJTU for our participation in the CoNLL 2019 Shared Task: Cross-Framework Meaning Representation Parsing.
1 code implementation • EMNLP 2020 • Sufeng Duan, Hai Zhao
Taking the greedy decoding algorithm as it should be, this work focuses on further strengthening the model itself for Chinese word segmentation (CWS), which results in an even faster and more accurate CWS model.
no code implementations • 31 Oct 2019 • Shu Jiang, Rui Wang, Zuchao Li, Masao Utiyama, Kehai Chen, Eiichiro Sumita, Hai Zhao, Bao-liang Lu
Most existing document-level NMT approaches are satisfied with a smattering sense of global document-level information, while this work focuses on exploiting detailed document-level context in terms of a memory network.
no code implementations • 31 Oct 2019 • Junru Zhou, Zhuosheng Zhang, Hai Zhao, Shuailiang Zhang
In this paper, we present a Linguistic Informed Multi-Task BERT (LIMIT-BERT) for learning language representations across multiple linguistic tasks by Multi-Task Learning (MTL).
no code implementations • 18 Sep 2019 • Jiangtong Li, Hai Zhao, Zuchao Li, Wei Bi, Xiaojiang Liu
Embedding from Language Models (ELMo) has been shown to be effective for improving many natural language processing (NLP) tasks, and ELMo takes character information to compose word representations to train language models. However, the character is an insufficient and unnatural linguistic unit for word representation. Thus we introduce Embedding from Subword-aware Language Models (ESuLMo), which learns word representations from subwords using unsupervised segmentation over words. We show that ESuLMo can enhance four benchmark NLP tasks more effectively than ELMo, including syntactic dependency parsing, semantic role labeling, implicit discourse relation recognition and textual entailment, which brings a meaningful improvement over ELMo.
1 code implementation • 5 Sep 2019 • Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, Xiang Zhou
The latest work on language representations carefully integrates contextualized features into language model training, which enables a series of success especially in various machine reading comprehension and natural language inference tasks.
Ranked #6 on Natural Language Inference on SNLI
no code implementations • 3 Sep 2019 • Zhuosheng Zhang, Bingjie Tang, Zuchao Li, Hai Zhao
This work models named entity distribution by visualizing the topological structure of the embedding space, leading to the assumption that most, if not all, named entities (NEs) of a language tend to aggregate together and can be accommodated by a specific hypersphere in embedding space.
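A toy illustration of the hypersphere assumption: fit a center and radius to embeddings of known entities of one type and test whether new vectors fall inside. The random vectors and the max-distance radius are stand-ins for real pre-trained embeddings and the paper's actual construction.

```python
import numpy as np

# toy word vectors; in practice these would come from pre-trained embeddings
rng = np.random.default_rng(0)
person_nes = rng.normal(loc=2.0, scale=0.3, size=(50, 16))   # embeddings of known PERSON entities
candidates = {"known-like": rng.normal(2.0, 0.3, 16), "outlier": rng.normal(-1.0, 0.3, 16)}

center = person_nes.mean(axis=0)
radius = np.linalg.norm(person_nes - center, axis=1).max()    # smallest sphere enclosing the known NEs

for name, vec in candidates.items():
    inside = np.linalg.norm(vec - center) <= radius
    print(f"{name}: inside PERSON hypersphere = {inside}")
```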
no code implementations • 3 Sep 2019 • Zhuosheng Zhang, Zhen Meng, Hai Zhao
This paper presents a smart sliding Chinese pinyin Input Method Editor (IME) for touchscreen devices which allows user finger sliding from one key to another on the touchscreen instead of tapping keys one by one, while the target Chinese character sequence will be predicted during the sliding process to help user input Chinese characters efficiently.
1 code implementation • IJCNLP 2019 • Shexia He, Zuchao Li, Hai Zhao
Recently, semantic role labeling (SRL) has earned a series of success with even higher performance improvements, which can be mainly attributed to syntactic integration and enhanced word representation.
no code implementations • EMNLP 2020 • Ying Luo, Hai Zhao, Junlang Zhan
Deep neural network models have helped named entity (NE) recognition achieve amazing performance without handcrafting features.
no code implementations • 31 Aug 2019 • Ying Luo, Hai Zhao, Zhuosheng Zhang, Bingjie Tang
For monolingual cases, the proposed named entity model gives an open description of diverse named entity types and different languages.
2 code implementations • 30 Aug 2019 • Shuailiang Zhang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, Xiang Zhou
Multi-choice reading comprehension is a challenging task of selecting an answer from a set of candidate options when given a passage and a question.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Junru Zhou, Zuchao Li, Hai Zhao
Both syntactic and semantic structures are key linguistic contextual clues, and parsing the latter has been well shown to benefit from parsing the former.
no code implementations • 29 Aug 2019 • Hongxiao Bai, Hai Zhao, Junhan Zhao
As an implicit discourse relation recognizer has to carefully tackle the semantic similarity of the given sentence pairs while a severe data sparsity issue exists at the same time, it should benefit from mastering the entire training data.
no code implementations • 22 Aug 2019 • Zuchao Li, Hai Zhao, Yingting Wu, Fengshun Xiao, Shu Jiang
Our experiments indicate that switching to the DSD loss after the convergence of ML training helps models escape local optima and stimulates stable performance improvements.
no code implementations • 18 Aug 2019 • Junru Zhou, Shuailiang Zhang, Hai Zhao
Constituent and dependency representations of syntactic structure share many linguistic and computational characteristics; this paper thus makes the first attempt to introduce a new model capable of parsing constituents and dependencies at the same time, letting either parser enhance the other.
1 code implementation • 14 Aug 2019 • Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, Rui Wang
In detail, for self-attention network (SAN) sponsored Transformer-based encoder, we introduce syntactic dependency of interest (SDOI) design into the SAN to form an SDOI-SAN with syntax-guided self-attention.
Ranked #5 on Question Answering on SQuAD2.0 dev
1 code implementation • NAACL 2019 • Chaoyu Guan, Yuhao Cheng, Hai Zhao
Semantic role labeling (SRL) is a task to recognize all the predicate-argument pairs of a sentence, which has been in a performance improvement bottleneck after a series of latest works were presented.
1 code implementation • ACL 2019 • Junru Zhou, Hai Zhao
In detail, we report 96.33 F1 for constituent parsing and 97.20% UAS for dependency parsing on PTB.
Ranked #4 on Constituency Parsing on Penn Treebank
no code implementations • ACL 2019 • Fengshun Xiao, Jiangtong Li, Hai Zhao, Rui Wang, Kehai Chen
To integrate different segmentations with the state-of-the-art NMT model, Transformer, we propose lattice-based encoders to explore effective word or subword representation in an automatic way during training.
no code implementations • NAACL 2019 • Pengshuai Li, Xinsong Zhang, Weijia Jia, Hai Zhao
Distant supervision has recently been widely used in relation extraction tasks that lack hand-labeled datasets.
no code implementations • ICLR 2019 • Huan Zhang, Hai Zhao
Sequence to sequence (seq2seq) models have become a popular framework for neural sequence prediction.
no code implementations • 22 Apr 2019 • Shu Jiang, Zhuosheng Zhang, Hai Zhao, Jiangtong Li, Yang Yang, Bao-liang Lu, Ning Xia
Chemical reaction practicality is the core task among all symbolic-intelligence-based chemical information processing; for example, it provides indispensable clues for further automatic synthesis route inference.
1 code implementation • 30 Jan 2019 • Junlang Zhan, Hai Zhao
Open information extraction (Open IE) is a challenging task especially due to its brittle data basis.
Ranked #5 on Open Information Extraction on OIE2016
no code implementations • 27 Jan 2019 • Shuailiang Zhang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, Xiang Zhou
Multi-choice reading comprehension is a challenging task that requires a complex reasoning procedure.
Ranked #3 on Question Answering on RACE
1 code implementation • ICLR 2019 • Junlang Zhan, Hai Zhao
Chemical information extraction is to convert chemical knowledge in text into true chemical database, which is a text processing task heavily relying on chemical compound name identification and standardization.
no code implementations • 18 Jan 2019 • Hai Zhao, Deng Cai, Changning Huang, Chunyu Kit
This paper reviews the development of Chinese word segmentation (CWS) in the most recent decade, 2007-2017.
1 code implementation • 16 Jan 2019 • Zuchao Li, Shexia He, Hai Zhao, Yiqing Zhang, Zhuosheng Zhang, Xi Zhou, Xiang Zhou
Semantic role labeling (SRL) aims to discover the predicate-argument structure of a sentence.
Ranked #9 on Semantic Role Labeling on CoNLL 2005
1 code implementation • ACL 2019 • Zhuosheng Zhang, Yafang Huang, Hai Zhao
Pinyin-to-character (P2C) conversion is the core component of pinyin-based Chinese input method engine (IME).
no code implementations • 11 Nov 2018 • Xinsong Zhang, Pengshuai Li, Weijia Jia, Hai Zhao
Disclosing multiple overlapping relations in a sentence remains challenging.
1 code implementation • 8 Nov 2018 • Zuchao Li, Jiaxun Cai, Hai Zhao
Easy-first parsing relies on subtree re-ranking to build the complete parse tree.
1 code implementation • 6 Nov 2018 • Zhuosheng Zhang, Hai Zhao, Kangwei Ling, Jiangtong Li, Zuchao Li, Shexia He, Guohong Fu
Representation learning is the foundation of machine reading comprehension and inference.
no code implementations • 6 Nov 2018 • Sufeng Duan, Jiangtong Li, Hai Zhao
Rapidly developed neural models have achieved performance in Chinese word segmentation (CWS) competitive with their traditional counterparts.
1 code implementation • CONLL 2018 • Zuchao Li, Shexia He, Zhuosheng Zhang, Hai Zhao
This paper describes the system of team LeisureX in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.
no code implementations • CONLL 2018 • Yingting Wu, Hai Zhao, Jia-Jun Tong
This paper describes the system of our team Phoenix for participating in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.
1 code implementation • EMNLP 2018 • Zuchao Li, Shexia He, Jiaxun Cai, Zhuosheng Zhang, Hai Zhao, Gongshen Liu, Linlin Li, Luo Si
Semantic role labeling (SRL) aims to recognize the predicate-argument structure of a sentence.
no code implementations • 8 Sep 2018 • Zhuosheng Zhang, Shexia He, Zuchao Li, Hai Zhao
The goal of semantic role labeling (SRL) is to discover the predicate-argument structure of a sentence, which plays a critical role in deep processing of natural language.
no code implementations • 8 Sep 2018 • Zhuosheng Zhang, Yuwei Wu, Zuchao Li, Hai Zhao
Who did what to whom is a major focus in natural language understanding, which is exactly the aim of the semantic role labeling (SRL) task.
Ranked #9 on Natural Language Inference on SNLI
Machine Reading Comprehension • Natural Language Understanding
1 code implementation • EMNLP 2018 • Yafang Huang, Hai Zhao
A Chinese pinyin input method engine (IME) converts pinyin into characters so that Chinese characters can be conveniently input into a computer through a common keyboard.
1 code implementation • EMNLP 2018 • Zhisong Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita, Hai Zhao
In Neural Machine Translation (NMT), the decoder can capture the features of the entire prediction history with neural connections and representations.
1 code implementation • 11 Aug 2018 • Jiaxun Cai, Shexia He, Zuchao Li, Hai Zhao
Semantic role labeling (SRL) is to recognize the predicate-argument structure of a sentence, including subtasks of predicate disambiguation and argument labeling.
no code implementations • COLING 2018 • Pengfei Zhu, Zhuosheng Zhang, Jiangtong Li, Yafang Huang, Hai Zhao
Traditional chatbots usually need a mass of human dialogue data, especially when using supervised machine learning methods.
no code implementations • 7 Aug 2018 • Zhuosheng Zhang, Yafang Huang, Pengfei Zhu, Hai Zhao
Machine reading comprehension is a task of modeling the relationship between a passage and a query.
1 code implementation • COLING 2018 • Zuchao Li, Jiaxun Cai, Shexia He, Hai Zhao
This paper presents a sequence to sequence (seq2seq) dependency parser by directly predicting the relative position of head for each given word, which therefore results in a truly end-to-end seq2seq dependency parser for the first time.
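A small sketch of the label scheme such a parser predicts: each word's head is encoded as a relative offset (with a special ROOT label), which a seq2seq model can emit token by token and which decodes back to an ordinary head array; the exact label vocabulary here is an assumption.

```python
# toy sentence and gold dependency heads (0-based; -1 marks the root)
tokens = ["She", "bought", "a", "red", "car"]
heads  = [1, -1, 4, 4, 1]

# encode each word's head as a relative offset -- the label a seq2seq parser would predict
labels = [("ROOT" if h == -1 else h - i) for i, h in enumerate(heads)]
print(labels)                      # [1, 'ROOT', 2, 1, -3]

# decode the relative offsets back into absolute head indices
decoded = [(-1 if l == "ROOT" else i + l) for i, l in enumerate(labels)]
assert decoded == heads
print(decoded)
```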
no code implementations • COLING 2018 • Jiaxun Cai, Shexia He, Zuchao Li, Hai Zhao
Semantic role labeling (SRL) is to recognize the predicate-argument structure of a sentence, including subtasks of predicate disambiguation and argument labeling.
1 code implementation • 25 Jul 2018 • Yingting Wu, Hai Zhao
For different language pairs, word-level neural machine translation (NMT) models with a fixed-size vocabulary suffer from the same problem of representing out-of-vocabulary (OOV) words.
1 code implementation • COLING 2018 • Hongxiao Bai, Hai Zhao
Implicit discourse relation recognition is a challenging task as the relation prediction without explicit connectives in discourse parsing needs understanding of text spans and cannot be easily derived from surface features from the input sentence pairs.
1 code implementation • ACL 2018 • Shexia He, Zuchao Li, Hai Zhao, Hongxiao Bai
Semantic role labeling (SRL) is dedicated to recognizing the predicate-argument structure of a sentence.
no code implementations • ACL 2018 • Yafang Huang, Zuchao Li, Zhuosheng Zhang, Hai Zhao
A Chinese pinyin input method engine (IME) lets users conveniently input Chinese into a computer by typing pinyin on a common keyboard.
1 code implementation • COLING 2018 • Zhuosheng Zhang, Hai Zhao
Answering questions from university admission exams (Gaokao in Chinese) is a challenging AI task since it requires effective representation to capture complicated semantic relations between questions and answers.
1 code implementation • COLING 2018 • Zhuosheng Zhang, Yafang Huang, Hai Zhao
Representation learning is the foundation of machine reading comprehension.
1 code implementation • COLING 2018 • Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, Gongshen Liu
In this paper, we formulate previous utterances into context using a proposed deep utterance aggregation model to form a fine-grained context representation.
Ranked #11 on Conversational Response Selection on E-commerce
no code implementations • SEMEVAL 2018 • Zhuosheng Zhang, Jiangtong Li, Hai Zhao, Bingjie Tang
This paper describes a hypernym discovery system for our participation in the SemEval-2018 Task 9, which aims to discover the best (set of) candidate hypernyms for input concepts or entities, given the search space of a pre-defined vocabulary.
Ranked #5 on Hypernym Discovery on Music domain
no code implementations • ACL 2018 • Lianhui Qin, Lemao Liu, Victoria Bi, Yan Wang, Xiaojiang Liu, Zhiting Hu, Hai Zhao, Shuming Shi
Comments of online articles provide extended views and improve user engagement.