Search Results for author: Hai Zhao

Found 215 papers, 75 papers with code

Syntax in End-to-End Natural Language Processing

no code implementations EMNLP (ACL) 2021 Hai Zhao, Rui Wang, Kehai Chen

This tutorial surveys the latest technical progress of syntactic parsing and the role of syntax in end-to-end natural language processing (NLP) tasks, in which semantic role labeling (SRL) and machine translation (MT) are representative NLP tasks that have long benefited from informative syntactic clues, even as advances in end-to-end deep learning models yield new results.

Machine Translation NMT +2

Nested Named Entity Recognition as Corpus Aware Holistic Structure Parsing

1 code implementation COLING 2022 Yifei Yang, Zuchao Li, Hai Zhao

To address this mismatch, this work models the full set of nested NEs in a sentence as a holistic structure and proposes a holistic structure parsing algorithm to disclose all the NEs at once.

Domain Adaptation named-entity-recognition +3

Aspect-based Sentiment Analysis as Machine Reading Comprehension

1 code implementation COLING 2022 Yifei Yang, Hai Zhao

Existing studies typically handle aspect-based sentiment analysis by stacking multiple neural modules, which inevitably results in severe error propagation.

Aspect-Based Sentiment Analysis (ABSA) Machine Reading Comprehension

Modeling Hierarchical Reasoning Chains by Linking Discourse Units and Key Phrases for Reading Comprehension

no code implementations COLING 2022 Jialin Chen, Zhuosheng Zhang, Hai Zhao

Machine reading comprehension (MRC) poses new challenges to logical reasoning, which aims to understand the implicit logical relations entailed in the given contexts and perform inference over them.

Logical Reasoning Machine Reading Comprehension +2

Restricted or Not: A General Training Framework for Neural Machine Translation

no code implementations ACL 2022 Zuchao Li, Masao Utiyama, Eiichiro Sumita, Hai Zhao

Although this can satisfy the requirements overall, it usually requires a larger beam size and far longer decoding time than unrestricted translation, which limits the concurrent processing ability of the translation model in deployment, and thus its practicality.

Machine Translation Translation

What Works and Doesn’t Work, A Deep Decoder for Neural Machine Translation

no code implementations Findings (ACL) 2022 Zuchao Li, Yiran Wang, Masao Utiyama, Eiichiro Sumita, Hai Zhao, Taro Watanabe

Inspired by this discovery, we then propose approaches to improving it, with respect to model structure and model training, to make the deep decoder practical in NMT.

Language Modelling Machine Translation +2

What If Sentence-hood is Hard to Define: A Case Study in Chinese Reading Comprehension

no code implementations Findings (EMNLP) 2021 Jiawei Wang, Hai Zhao, Yinggong Zhao, Libin Shen

Machine reading comprehension (MRC) is a challenging NLP task, for it requires carefully dealing with all linguistic granularities from word and sentence to passage.

Chinese Reading Comprehension Machine Reading Comprehension

Multimodal Chain-of-Thought Reasoning in Language Models

3 code implementations 2 Feb 2023 Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, Alex Smola

Large language models (LLMs) have shown impressive performance on complex reasoning by leveraging chain-of-thought (CoT) prompting to generate intermediate reasoning chains as the rationale to infer the answer.

Language Modelling Science Question Answering

Channel-aware Decoupling Network for Multi-turn Dialogue Comprehension

no code implementations 10 Jan 2023 Zhuosheng Zhang, Hai Zhao, Longxiang Liu

We decouple the contextualized word representations by masking mechanisms in a Transformer-based PrLM, making each word only focus on the words in the current utterance, other utterances, and the two speaker roles (i.e., utterances of the sender and utterances of the receiver), respectively.
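
As a rough illustration of this kind of channel-wise masking (a minimal sketch, not the paper's implementation; the utterance and speaker annotations are assumed to be given), one could build per-channel boolean attention masks as follows:

import torch

def channel_masks(utt_ids: torch.Tensor, spk_ids: torch.Tensor):
    """Build decoupling masks for one dialogue of seq_len tokens.

    utt_ids[i] - index of the utterance token i belongs to
    spk_ids[i] - speaker role of token i (0 = sender, 1 = receiver)
    Returns [seq_len, seq_len] boolean masks, one per channel: a word
    may attend only to its own utterance, to other utterances, to the
    sender's tokens, or to the receiver's tokens, respectively.
    """
    same_utt = utt_ids.unsqueeze(0) == utt_ids.unsqueeze(1)
    sender = (spk_ids == 0).unsqueeze(0).expand(len(spk_ids), -1)
    return {
        "current_utterance": same_utt,
        "other_utterances": ~same_utt,
        "sender": sender,
        "receiver": ~sender,
    }

# Toy dialogue: 6 tokens, two utterances from two speakers
masks = channel_masks(torch.tensor([0, 0, 0, 1, 1, 1]),
                      torch.tensor([0, 0, 0, 1, 1, 1]))
print(masks["current_utterance"])

Each mask would then be applied inside separate attention heads so the channels stay decoupled.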

Self-Prompting Large Language Models for Open-Domain QA

no code implementations 16 Dec 2022 Junlong Li, Zhuosheng Zhang, Hai Zhao

Open-Domain Question Answering (ODQA) requires models to answer factoid questions with no context given.

Open-Domain Question Answering Retrieval

Language Model Pre-training on True Negatives

no code implementations 1 Dec 2022 Zhuosheng Zhang, Hai Zhao, Masao Utiyama, Eiichiro Sumita

Discriminative pre-trained language models (PLMs) learn to predict original texts from intentionally corrupted ones.

Language Modelling

Forging Multiple Training Objectives for Pre-trained Language Models via Meta-Learning

2 code implementations 19 Oct 2022 Hongqiu Wu, Ruixue Ding, Hai Zhao, Boli Chen, Pengjun Xie, Fei Huang, Min Zhang

Multiple pre-training objectives fill the gap in understanding capability left by single-objective language modeling, which serves the ultimate purpose of pre-trained language models (PrLMs): generalizing well across a wide range of scenarios.

Language Modelling Meta-Learning

Sentence Representation Learning with Generative Objective rather than Contrastive Objective

1 code implementation 16 Oct 2022 Bohong Wu, Hai Zhao

Though offering amazing contextualized token-level representations, current pre-trained language models pay less attention to accurately acquiring sentence-level representations during their self-supervised pre-training.

Representation Learning Retrieval +3

Towards End-to-End Open Conversational Machine Reading

no code implementations 13 Oct 2022 Sizhe Zhou, Siru Ouyang, Zhuosheng Zhang, Hai Zhao

In open-retrieval conversational machine reading (OR-CMR) task, machines are required to do multi-turn question answering given dialogue history and a textual knowledge base.

Decision Making Question Answering +4

Task Compass: Scaling Multi-task Pre-training with Task Prefix

1 code implementation 12 Oct 2022 Zhuosheng Zhang, Shuohang Wang, Yichong Xu, Yuwei Fang, Wenhao Yu, Yang Liu, Hai Zhao, Chenguang Zhu, Michael Zeng

Leveraging task-aware annotated data as supervised signals to assist with self-supervised learning on large-scale unlabeled data has become a new trend in pre-training language models.

Data Augmentation Multi-Task Learning +1

Instance Regularization for Discriminative Language Model Pre-training

1 code implementation 11 Oct 2022 Zhuosheng Zhang, Hai Zhao, Ming Zhou

They treat training instances equally throughout the training process, paying little attention to the individual contribution of those instances.

Denoising Language Modelling +2

Semantic-Preserving Adversarial Code Comprehension

1 code implementation COLING 2022 Yiyang Li, Hongqiu Wu, Hai Zhao

Based on the tremendous success of pre-trained language models (PrLMs) for source code comprehension tasks, current literature studies either ways to further improve the performance (generalization) of PrLMs, or their robustness against adversarial attacks.

Learning Better Masking for Better Language Model Pre-training

no code implementations 23 Aug 2022 Dongjie Yang, Zhuosheng Zhang, Hai Zhao

Masked Language Modeling (MLM) has been widely used as the denoising objective in pre-training language models (PrLMs).

Denoising Language Modelling +1

Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning

no code implementations 23 Aug 2022 Letian Peng, Zuchao Li, Hai Zhao

In detail, it works on PLMs according to the Replaced Token Detection (RTD) pre-training objective of ELECTRA, in which the corruption detection objective reflects confidence in contextual integrity, which is more relevant to commonsense reasoning than existing probability-based measures.

Language Modelling Question Answering +1
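
A minimal sketch of scoring candidates by such RTD-based confidence with an off-the-shelf ELECTRA discriminator from Hugging Face Transformers (the checkpoint and the token-averaging rule are illustrative assumptions, not necessarily the paper's exact setup):

import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

name = "google/electra-small-discriminator"  # any ELECTRA discriminator
tok = ElectraTokenizerFast.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name).eval()

@torch.no_grad()
def integrity_confidence(sentence: str) -> float:
    """Higher = the discriminator flags fewer 'replaced-looking' tokens,
    i.e. the sentence reads as contextually intact."""
    batch = tok(sentence, return_tensors="pt")
    logits = model(**batch).logits  # > 0 means 'this token looks replaced'
    return float((-logits).sigmoid().mean())  # mean P(token is original)

# Rank commonsense candidates by confidence instead of perplexity
for cand in ["Birds can fly.", "Birds can read."]:
    print(cand, round(integrity_confidence(cand), 4))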

Rethinking Textual Adversarial Defense for Pre-trained Language Models

no code implementations 21 Jul 2022 Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao

However, we find that most existing textual adversarial examples are unnatural, and they can be easily distinguished by both humans and machines.

Adversarial Attack Adversarial Defense

Adversarial Self-Attention for Language Understanding

1 code implementation 25 Jun 2022 Hongqiu Wu, Ruixue Ding, Hai Zhao, Pengjun Xie, Fei Huang, Min Zhang

Deep neural models (e.g., Transformer) naturally learn spurious features, which create a "shortcut" between the labels and inputs, thus impairing generalization and robustness.

Machine Reading Comprehension Named Entity Recognition (NER) +4

Generative or Contrastive? Phrase Reconstruction for Better Sentence Representation Learning

no code implementations 20 Apr 2022 Bohong Wu, Hai Zhao

If self-supervised learning can be divided into two subcategories, generative and contrastive, then most existing studies show that sentence representation learning may benefit more from contrastive methods than from generative methods.

Contrastive Learning Representation Learning +4

Back to the Future: Bidirectional Information Decoupling Network for Multi-turn Dialogue Modeling

1 code implementation 18 Apr 2022 Yiyang Li, Hai Zhao, Zhuosheng Zhang

Multi-turn dialogue modeling, as a challenging branch of natural language understanding (NLU), aims to build representations for machines to understand human dialogues, providing a solid foundation for multiple downstream tasks.

Natural Language Understanding

Nested Named Entity Recognition as Holistic Structure Parsing

no code implementations 17 Apr 2022 Yifei Yang, Zuchao Li, Hai Zhao

To address this mismatch, this work models the full set of nested NEs in a sentence as a holistic structure and proposes a holistic structure parsing algorithm to disclose all the NEs at once.

Domain Adaptation named-entity-recognition +3

Lite Unified Modeling for Discriminative Reading Comprehension

1 code implementation ACL 2022 Yilin Zhao, Hai Zhao, Libin Shen, Yinggong Zhao

As a broad and major category in machine reading comprehension (MRC), the generalized goal of discriminative MRC is answer prediction from the given materials.

Machine Reading Comprehension Multi-Choice MRC +1

Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model

1 code implementation Findings (ACL) 2022 Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao

We question the validity of current evaluation of robustness of PrLMs based on these non-natural adversarial samples and propose an anomaly detector to evaluate the robustness of PrLMs with more natural adversarial samples.

Data Augmentation Language Modelling

Semantics-Preserved Distortion for Personal Privacy Protection in Information Management

no code implementations 4 Jan 2022 Jiajia Li, Letian Peng, Ping Wang, Zuchao Li, Xueyi Li, Hai Zhao

As model training on information from users is likely to invade personal privacy, many methods have been proposed to block the learning and memorizing of sensitive data in raw texts.

Constituency Parsing Federated Learning +5

ArT: All-round Thinker for Unsupervised Commonsense Question-Answering

1 code implementation 26 Dec 2021 Jiawei Wang, Hai Zhao

In detail, our model first focuses on key parts of the given context, and then generates highly related knowledge on that basis in an associative way, much like human thinking.

Association Question Answering

Multilingual Pre-training with Universal Dependency Learning

no code implementations NeurIPS 2021 Kailai Sun, Zuchao Li, Hai Zhao

The pre-trained language model (PrLM) demonstrates domination in downstream natural language processing tasks, in which multilingual PrLM takes advantage of language universality to alleviate the issue of limited resources for low-resource languages.

Dependency Parsing Natural Language Understanding +1

Seeking Common but Distinguishing Difference, A Joint Aspect-based Sentiment Analysis Model

1 code implementation EMNLP 2021 Hongjiang Jing, Zuchao Li, Hai Zhao, Shu Jiang

Therefore, we propose a joint ABSA model, which not only enjoys the benefits of encoder sharing but also focuses on the difference to improve the effectiveness of the model.

Aspect-Based Sentiment Analysis (ABSA) Term Extraction

Tracing Origins: Coreference-aware Machine Reading Comprehension

1 code implementation ACL 2022 Baorong Huang, Zhuosheng Zhang, Hai Zhao

In this paper, we imitate the human reading process in connecting anaphoric expressions and explicitly leverage the coreference information of entities to enhance the word embeddings from the pre-trained language model. This highlights the coreference mentions of entities that must be identified for coreference-intensive question answering in QUOREF, a relatively new dataset specifically designed to evaluate a model's coreference-related performance.

Language Modelling Machine Reading Comprehension +2
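
One simple way to picture this enhancement (a sketch assuming coreference cluster ids come from an external resolver; the class and parameter names are hypothetical) is to add a learned cluster embedding to the PrLM's token representations:

import torch
import torch.nn as nn

class CorefEnhancedEmbedding(nn.Module):
    """Add a learned coreference-cluster embedding so that mentions of
    the same entity share a signal on top of the PrLM embeddings."""
    def __init__(self, hidden: int, max_clusters: int = 64):
        super().__init__()
        # cluster id 0 is reserved for "not in any coreference chain"
        self.cluster_emb = nn.Embedding(max_clusters, hidden)

    def forward(self, token_reprs, cluster_ids):
        return token_reprs + self.cluster_emb(cluster_ids)

enhance = CorefEnhancedEmbedding(hidden=16)
out = enhance(torch.randn(1, 5, 16), torch.tensor([[0, 1, 0, 1, 0]]))
print(out.shape)  # torch.Size([1, 5, 16])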

Structural Characterization for Dialogue Disentanglement

1 code implementation ACL 2022 Xinbei Ma, Zhuosheng Zhang, Hai Zhao

Tangled multi-party dialogue contexts lead to challenges for dialogue reading comprehension, where multiple dialogue threads flow simultaneously within a common dialogue record, increasing difficulties in understanding the dialogue history for both human and machine.

Disentanglement Feature Engineering +1

Sentence-aware Contrastive Learning for Open-Domain Passage Retrieval

no code implementations ACL 2022 Bohong Wu, Zhuosheng Zhang, JinYuan Wang, Hai Zhao

In detail, we introduce an in-passage negative sampling strategy to encourage a diverse generation of sentence representations within the same passage.

Contrastive Learning Passage Retrieval +1
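
A minimal sketch of the in-passage negative idea as an InfoNCE-style loss (the temperature, the positive-view convention, and all names here are illustrative assumptions):

import torch
import torch.nn.functional as F

def in_passage_contrastive_loss(sent_reprs: torch.Tensor,
                                anchor: int, positive: int,
                                temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE over the sentences of ONE passage: the anchor pulls its
    positive view closer while every other sentence of the same passage
    acts as a negative, pushing sentence representations within the
    passage apart (the diversity the abstract aims for)."""
    z = F.normalize(sent_reprs, dim=-1)           # [n_sents, dim]
    sims = z[anchor] @ z.T / temperature          # similarities [n_sents]
    sims[anchor] = float("-inf")                  # never contrast with self
    return F.cross_entropy(sims.unsqueeze(0), torch.tensor([positive]))

# Toy passage: 4 sentence vectors; index 1 is a second view of sentence 0
loss = in_passage_contrastive_loss(torch.randn(4, 128), anchor=0, positive=1)
print(loss.item())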

Advances in Multi-turn Dialogue Comprehension: A Survey

no code implementations 11 Oct 2021 Zhuosheng Zhang, Hai Zhao

In this paper, we review the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.

Reading Comprehension

Multi-tasking Dialogue Comprehension with Discourse Parsing

1 code implementation PACLIC 2021 Yuchen He, Zhuosheng Zhang, Hai Zhao

Multi-party dialogue machine reading comprehension (MRC) raises an even more challenging understanding goal on dialogue with more than two involved speakers, compared with the traditional plain passage style MRC.

Discourse Parsing Machine Reading Comprehension +1

A Novel Metric for Evaluating Semantics Preservation

1 code implementation 4 Oct 2021 Letian Peng, Zuchao Li, Hai Zhao

By exploiting the property of NDD, we implement an unsupervised and even training-free algorithm for extractive sentence compression.

Language Modelling Predicate Detection +3
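
A rough sketch of how a neighboring-distribution divergence could score deletions with an off-the-shelf masked LM (the naive prefix alignment and the exact divergence used here are simplifying assumptions, not the paper's definition of NDD):

import torch
import torch.nn.functional as F
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

@torch.no_grad()
def ndd(original: str, perturbed: str) -> float:
    """Divergence between the MLM's predictive distributions before and
    after a perturbation; low values suggest semantics are preserved.
    (Token alignment here is a crude prefix overlap.)"""
    p = mlm(**tok(original, return_tensors="pt")).logits.log_softmax(-1)
    q = mlm(**tok(perturbed, return_tensors="pt")).logits.log_softmax(-1)
    n = min(p.size(1), q.size(1))  # compare only the overlapping prefix
    return float(F.kl_div(q[0, :n], p[0, :n],
                          log_target=True, reduction="batchmean"))

# Dropping a modifier should disturb neighbors less than dropping the verb
print(ndd("the old man ate the apple", "the man ate the apple"))
print(ndd("the old man ate the apple", "the old man the apple"))

A compression loop would then greedily delete the token whose removal keeps this divergence lowest.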

Logic Pre-Training of Language Models

no code implementations 29 Sep 2021 Siru Ouyang, Zhuosheng Zhang, Hai Zhao

Pre-trained language models (PrLMs) have been shown useful for enhancing a broad range of natural language understanding (NLU) tasks.

Logical Reasoning Machine Reading Comprehension +3

Sparse Fuzzy Attention for Structured Sentiment Analysis

no code implementations 14 Sep 2021 Letian Peng, Zuchao Li, Hai Zhao

Attention scorers have achieved success in parsing tasks like semantic and syntactic dependency parsing.

Dependency Parsing Sentiment Analysis

Enhanced Speaker-aware Multi-party Multi-turn Dialogue Comprehension

no code implementations 9 Sep 2021 Xinbei Ma, Zhuosheng Zhang, Hai Zhao

Multi-party multi-turn dialogue comprehension brings unprecedented challenges in handling complicated scenarios with multiple speakers and criss-crossed discourse relationships among speaker-aware utterances.

Question Answering

Self- and Pseudo-self-supervised Prediction of Speaker and Key-utterance for Multi-party Dialogue Reading Comprehension

1 code implementation Findings (EMNLP) 2021 Yiyang Li, Hai Zhao

Multi-party dialogue machine reading comprehension (MRC) brings a tremendous challenge since it involves multiple speakers in one dialogue, resulting in intricate speaker information flows and noisy dialogue contexts.

Machine Reading Comprehension Question Answering

Unsupervised Open-Domain Question Answering

no code implementations 31 Aug 2021 Pengfei Zhu, Xiaoguang Li, Jian Li, Hai Zhao

Open-domain Question Answering (ODQA) has achieved significant results in the supervised learning setting.

Machine Reading Comprehension Open-Domain Question Answering

Span Fine-tuning for Pre-trained Language Models

no code implementations Findings (EMNLP) 2021 Rongzhou Bao, Zhuosheng Zhang, Hai Zhao

Pre-trained language models (PrLMs) have to carefully manage input units when training on a very large text with a vocabulary consisting of millions of words.

Smoothing Dialogue States for Open Conversational Machine Reading

1 code implementation EMNLP 2021 Zhuosheng Zhang, Siru Ouyang, Hai Zhao, Masao Utiyama, Eiichiro Sumita

In this work, we propose an effective gating strategy by smoothing the two dialogue states in only one decoder and bridge decision making and question generation to provide a richer dialogue state reference.

Decision Making Question Generation +2
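
The gating idea can be pictured with a minimal module that smooths two dialogue-state vectors into a single reference for one decoder (a sketch; the parameterization and names are assumptions, not the paper's architecture):

import torch
import torch.nn as nn

class StateSmoother(nn.Module):
    """Learned gate that fuses the decision-making state and the
    question-generation state into one smoothed dialogue state."""
    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Linear(2 * dim, dim)

    def forward(self, decision_state, question_state):
        g = torch.sigmoid(self.gate(
            torch.cat([decision_state, question_state], dim=-1)))
        return g * decision_state + (1 - g) * question_state

smoother = StateSmoother(dim=8)
print(smoother(torch.randn(2, 8), torch.randn(2, 8)).shape)  # [2, 8]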

Cross-lingual Transferring of Pre-trained Contextualized Language Models

no code implementations 27 Jul 2021 Zuchao Li, Kevin Parnow, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita

Though the pre-trained contextualized language model (PrLM) has made a significant impact on NLP, training PrLMs in languages other than English can be impractical for two reasons: other languages often lack corpora sufficient for training powerful PrLMs, and because of the commonalities among human languages, computationally expensive PrLM training for different languages is somewhat redundant.

Language Modelling Machine Translation +1

Graph-free Multi-hop Reading Comprehension: A Select-to-Guide Strategy

no code implementations 25 Jul 2021 Bohong Wu, Zhuosheng Zhang, Hai Zhao

Multi-hop reading comprehension (MHRC) requires not only predicting the correct answer span in the given passage, but also providing a chain of supporting evidence for reasoning interpretability.

Multi-Hop Reading Comprehension

Dialogue-oriented Pre-training

1 code implementation Findings (ACL) 2021 Yi Xu, Hai Zhao

Pre-trained language models (PrLMs) have been shown to be powerful in enhancing a broad range of downstream tasks, including various dialogue-related ones.

Language Modelling

Pre-training Universal Language Representation

no code implementations ACL 2021 Yian Li, Hai Zhao

Despite well-developed cutting-edge representation learning for language, most language representation models usually focus on specific levels of linguistic units.

Question Answering Representation Learning

Defending Pre-trained Language Models from Adversarial Word Substitutions Without Performance Sacrifice

1 code implementation 30 May 2021 Rongzhou Bao, Jiayi Wang, Hai Zhao

In detail, we design an auxiliary anomaly detection classifier and adopt a multi-task learning procedure, by which PrLMs are able to distinguish adversarial input samples.

Adversarial Attack Anomaly Detection +2

Grammatical Error Correction as GAN-like Sequence Labeling

no code implementations Findings (ACL) 2021 Kevin Parnow, Zuchao Li, Hai Zhao

In Grammatical Error Correction (GEC), sequence labeling models enjoy fast inference compared to sequence-to-sequence models; however, inference in sequence labeling GEC models is an iterative process, as sentences are passed to the model for multiple rounds of correction, which exposes the model to sentences with progressively fewer errors at each round.

Grammatical Error Correction

Structural Pre-training for Dialogue Comprehension

no code implementations ACL 2021 Zhuosheng Zhang, Hai Zhao

Pre-trained language models (PrLMs) have demonstrated superior performance due to their strong ability to learn universal language representations from self-supervised pre-training.

Fact-driven Logical Reasoning

1 code implementation NeurIPS 2021 Siru Ouyang, Zhuosheng Zhang, Hai Zhao

Therefore, we argue that the natural logic units would be the group of backbone constituents of the sentence such as the subject-verb-object formed "facts", covering both global and local knowledge pieces that are necessary as the basis for logical reasoning.

Logical Reasoning

Head-driven Phrase Structure Parsing in O($n^3$) Time Complexity

no code implementations 20 May 2021 Zuchao Li, Junru Zhou, Hai Zhao, Kevin Parnow

Constituent and dependency parsing, the two classic forms of syntactic parsing, have been found to benefit from joint training and decoding under a uniform formalism, Head-driven Phrase Structure Grammar (HPSG).

Dependency Parsing

Neural Unsupervised Semantic Role Labeling

no code implementations 19 Apr 2021 Kashif Munir, Hai Zhao, Zuchao Li

To decompose the task into two argument-related subtasks, identification and clustering, we propose a pipeline that correspondingly consists of two neural modules.

Semantic Role Labeling

Not All Attention Is All You Need

no code implementations NeurIPS 2021 Hongqiu Wu, Hai Zhao, Min Zhang

Beyond the success story of pre-trained language models (PrLMs) in recent natural language processing, they are susceptible to over-fitting due to their unusually large model size.

Document Classification Named Entity Recognition (NER) +1

Advances and Challenges in Unsupervised Neural Machine Translation

no code implementations EACL 2021 Rui Wang, Hai Zhao

Unsupervised cross-lingual language representation initialization methods, together with mechanisms such as denoising and back-translation, have advanced unsupervised neural machine translation (UNMT), which has achieved impressive results.

Denoising Machine Translation +1

Advances in Multi-turn Dialogue Comprehension: A Survey

no code implementations 4 Mar 2021 Zhuosheng Zhang, Hai Zhao

In this paper, we review the previous methods from the technical perspective of dialogue modeling for the dialogue comprehension task.

Language Modelling Question Answering +1

Text Compression-aided Transformer Encoding

no code implementations 11 Feb 2021 Zuchao Li, Zhuosheng Zhang, Hai Zhao, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita

In this paper, we propose explicit and implicit text compression approaches to enhance the Transformer encoding and evaluate models using this approach on several typical downstream tasks that rely on the encoding heavily.

Text Compression

Multi-turn Dialogue Reading Comprehension with Pivot Turns and Knowledge

no code implementations 10 Feb 2021 Zhuosheng Zhang, Junlong Li, Hai Zhao

Experimental results on four dialogue comprehension benchmark tasks show that our proposed model achieves great improvements over baselines.

Reading Comprehension

To Understand Representation of Layer-aware Sequence Encoders as Multi-order-graph

no code implementations 16 Jan 2021 Sufeng Duan, Hai Zhao

Based on our explanation, we also propose a revisited multigraph, called Multi-order-Graph (MoG), to model the graph structures in the SAN-based model as subgraphs of MoG, converting the encoding of the SAN-based model into the generation of MoG.

Machine Translation Translation

Cross-lingual Transfer Learning for Pre-trained Contextualized Language Models

no code implementations 1 Jan 2021 Zuchao Li, Kevin Barry Parnow, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita

Though the pre-trained contextualized language model (PrLM) has made a significant impact on NLP, training PrLMs in languages other than English can be impractical for two reasons: other languages often lack corpora sufficient for training powerful PrLMs, and because of the commonalities among human languages, computationally expensive PrLM training for different languages is somewhat redundant.

Cross-Lingual Transfer Language Modelling +3

Switching-Aligned-Words Data Augmentation for Neural Machine Translation

no code implementations 1 Jan 2021 Fengshun Xiao, Zuchao Li, Hai Zhao

In neural machine translation (NMT), data augmentation methods such as back-translation make it possible to use extra monolingual data to help improve translation performance, while they need extra training data, and in-domain monolingual data is not always available.

Data Augmentation Machine Translation +2

Later Span Adaptation for Language Understanding

no code implementations 1 Jan 2021 Rongzhou Bao, Zhuosheng Zhang, Hai Zhao

Instead of fixing the linguistic unit of input too early, as nearly all previous work did, we propose a novel method that combines span-level information into the representations generated by PrLMs during the fine-tuning phase for better flexibility.

Natural Language Understanding

Efficient Neural Machine Translation with Prior Word Alignment

no code implementations 1 Jan 2021 Jeonghyeok Park, Hai Zhao

In this paper, we propose a novel method that infuses prior word alignment information into neural machine translation (NMT) to provide hints or guidelines for the target sentence at run time.

Machine Translation NMT +2

Enhancing Pre-trained Language Model with Lexical Simplification

no code implementations 30 Dec 2020 Rongzhou Bao, Jiayi Wang, Zhuosheng Zhang, Hai Zhao

By substituting complex words with simple alternatives, lexical simplification (LS) is a recognized method to reduce such lexical diversity, and therefore to improve the understandability of sentences.

General Classification Language Modelling +3

Code Summarization with Structure-induced Transformer

1 code implementation Findings (ACL) 2021 Hongqiu Wu, Hai Zhao, Min Zhang

Code summarization (CS) is becoming a promising area in language understanding; it aims to automatically generate sensible human language for a programming language in the format of source code, for the convenience of program developers.

Code Summarization Natural Language Understanding +1

BURT: BERT-inspired Universal Representation from Learning Meaningful Segment

no code implementations 28 Dec 2020 Yian Li, Hai Zhao

We present a universal representation model, BURT (BERT-inspired Universal Representation from learning meaningful segmenT), to encode different levels of linguistic unit into the same vector space.

Information Retrieval Question Answering +4

SG-Net: Syntax Guided Transformer for Language Representation

no code implementations 27 Dec 2020 Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, Rui Wang

In detail, for self-attention network (SAN) sponsored Transformer-based encoder, we introduce syntactic dependency of interest (SDOI) design into the SAN to form an SDOI-SAN with syntax-guided self-attention.

Machine Reading Comprehension Machine Translation +2
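
One plausible reading of the SDOI constraint is that each token may attend only to itself and its syntactic ancestors; a toy mask builder under that assumption (all names illustrative):

import torch

def sdoi_mask(heads: list[int]) -> torch.Tensor:
    """heads[i] is the dependency head of token i (-1 marks the root).
    Returns a boolean [n, n] mask where token i may attend to itself
    and to every ancestor on its path to the root."""
    n = len(heads)
    mask = torch.zeros(n, n, dtype=torch.bool)
    for i in range(n):
        j = i
        while j != -1:  # walk up the tree, enabling each ancestor
            mask[i, j] = True
            j = heads[j]
    return mask  # usable as a boolean mask inside self-attention

# "the cat sat" with 'sat' as root: the -> cat, cat -> sat, sat -> ROOT
print(sdoi_mask([1, 2, -1]))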

Adaptive Convolution for Semantic Role Labeling

no code implementations 27 Dec 2020 Kashif Munir, Hai Zhao, Zuchao Li

Semantic role labeling (SRL) aims at elaborating the meaning of a sentence by forming a predicate-argument structure.

Semantic Role Labeling

Cross-lingual Universal Dependency Parsing Only from One Monolingual Treebank

no code implementations 24 Dec 2020 Kailai Sun, Zuchao Li, Hai Zhao

As it is unlikely that a treebank can be obtained for every human language, in this work we propose an effective cross-lingual UD parsing framework for transferring a parser from only one source monolingual treebank to any other target language without an available treebank.

Cross-Lingual Transfer Dependency Parsing +3

Reference Knowledgeable Network for Machine Reading Comprehension

1 code implementation 7 Dec 2020 Yilin Zhao, Zhuosheng Zhang, Hai Zhao

Thus we propose a novel reference-based knowledge enhancement model called Reference Knowledgeable Network (RekNet), which simulates human reading strategies to refine critical information from the passage and quote explicit knowledge when necessary.

Machine Reading Comprehension Multi-Choice MRC

LIMIT-BERT : Linguistics Informed Multi-Task BERT

1 code implementation Findings of the Association for Computational Linguistics 2020 Junru Zhou, Zhuosheng Zhang, Hai Zhao, Shuailiang Zhang

Besides, LIMIT-BERT takes a semi-supervised learning strategy to offer the same large amount of linguistics task data as that for the language model training.

Language Modelling Multi-Task Learning +3

Topic-Aware Multi-turn Dialogue Modeling

1 code implementation 26 Sep 2020 Yi Xu, Hai Zhao, Zhuosheng Zhang

In retrieval-based multi-turn dialogue modeling, it remains a challenge to select the most appropriate response by extracting salient features from context utterances.

Retrieval

Document-level Neural Machine Translation with Document Embeddings

no code implementations 16 Sep 2020 Shu Jiang, Hai Zhao, Zuchao Li, Bao-liang Lu

Standard neural machine translation (NMT) operates on the assumption of document-level context independence.

Machine Translation NMT +1

Graph-to-Sequence Neural Machine Translation

no code implementations 16 Sep 2020 Sufeng Duan, Hai Zhao, Rui Wang

In light of the fact that current NMT models more or less capture graph information among the sequence in a latent way, we present a graph-to-sequence model that facilitates explicit graph information capturing.

Graph-to-Sequence Machine Translation +2

Filling the Gap of Utterance-aware and Speaker-aware Representation for Multi-turn Dialogue

1 code implementation 14 Sep 2020 Longxiang Liu, Zhuosheng Zhang, Hai Zhao, Xi Zhou, Xiang Zhou

A multi-turn dialogue is composed of multiple utterances from two or more different speaker roles.

Retrieval

Composing Answer from Multi-spans for Reading Comprehension

no code implementations 14 Sep 2020 Zhuosheng Zhang, Yiqing Zhang, Hai Zhao, Xi Zhou, Xiang Zhou

This paper presents a novel method to generate answers for non-extraction machine reading comprehension (MRC) tasks whose answers cannot be simply extracted as one span from the given passages.

Machine Reading Comprehension

Syntax Role for Neural Semantic Role Labeling

no code implementations CL (ACL) 2021 Zuchao Li, Hai Zhao, Shexia He, Jiaxun Cai

Semantic role labeling (SRL) is dedicated to recognizing the semantic predicate-argument structure of a sentence.

Semantic Role Labeling

Learning Universal Representations from Word to Sentence

no code implementations 10 Sep 2020 Yian Li, Hai Zhao

Despite well-developed cutting-edge representation learning for language, most language representation models usually focus on a specific level of linguistic unit, which causes great inconvenience when handling multiple layers of linguistic objects in a unified way.

Representation Learning

Dialogue-adaptive Language Model Pre-training From Quality Estimation

1 code implementation 10 Sep 2020 Junlong Li, Zhuosheng Zhang, Hai Zhao

Pre-trained language models (PrLMs) have achieved great success on a wide range of natural language processing tasks by virtue of the universal language representation ability obtained by self-supervised learning on a large corpus.

Informativeness Language Modelling +2

Machine Reading Comprehension: The Role of Contextualized Language Models and Beyond

1 code implementation 13 May 2020 Zhuosheng Zhang, Hai Zhao, Rui Wang

In this survey, we provide a comprehensive and comparative review of MRC covering overall research topics about 1) the origin and development of MRC and CLM, with a particular focus on the role of CLMs; 2) the impact of MRC and CLM on the NLP community; 3) the definition, datasets, and evaluation of MRC; 4) general MRC architecture and technical methods in the view of the two-stage Encoder-Decoder solving architecture from the insights of the cognitive process of humans; 5) previous highlights, emerging topics, and our empirical analysis, among which we especially focus on what works in different periods of MRC research.

Machine Reading Comprehension Text Matching

Bipartite Flat-Graph Network for Nested Named Entity Recognition

1 code implementation ACL 2020 Ying Luo, Hai Zhao

In this paper, we propose a novel bipartite flat-graph network (BiFlaG) for nested named entity recognition (NER), which contains two subgraph modules: a flat NER module for outermost entities and a graph module for all the entities located in inner layers.

named-entity-recognition Named Entity Recognition +3

Neural Machine Translation with Universal Visual Representation

1 code implementation ICLR 2020 Zhuosheng Zhang, Kehai Chen, Rui Wang, Masao Utiyama, Eiichiro Sumita, Zuchao Li, Hai Zhao

Though visual information has been introduced for enhancing neural machine translation (NMT), its effectiveness strongly relies on the availability of large amounts of bilingual parallel sentence pairs with manual image annotations.

Machine Translation NMT +1

Data-dependent Gaussian Prior Objective for Language Generation

no code implementations ICLR 2020 Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao

However, MLE focuses on once-to-all matching between the predicted sequence and gold-standard, consequently treating all incorrect predictions as being equally incorrect.

Image Captioning L2 Regularization +4

Capsule-Transformer for Neural Machine Translation

no code implementations 30 Apr 2020 Sufeng Duan, Juncheng Cao, Hai Zhao

In this paper, we thus propose the capsule-Transformer, which extends the linear transformation into a more general capsule routing algorithm by taking SAN as a special case of capsule network.

Machine Translation Translation

Syntax-aware Data Augmentation for Neural Machine Translation

no code implementations 29 Apr 2020 Sufeng Duan, Hai Zhao, Dong-dong Zhang, Rui Wang

Data augmentation is an effective performance enhancement in neural machine translation (NMT) by generating additional bilingual data.

Data Augmentation Machine Translation +2

Knowledgeable Dialogue Reading Comprehension on Key Turns

no code implementations 29 Apr 2020 Junlong Li, Zhuosheng Zhang, Hai Zhao

In this paper, the relevance of each turn to the question is calculated in order to choose key turns.

Answer Selection Language Modelling +1

BURT: BERT-inspired Universal Representation from Twin Structure

no code implementations 29 Apr 2020 Yian Li, Hai Zhao

Pre-trained contextualized language models such as BERT have shown great effectiveness in a wide range of downstream Natural Language Processing (NLP) tasks.

Natural Language Inference STS +2

Semantics-Aware Inferential Network for Natural Language Understanding

no code implementations 28 Apr 2020 Shuailiang Zhang, Hai Zhao, Junru Zhou

Taking explicit contextualized semantics as a complementary input, the inferential module of SAIN enables a series of reasoning steps over semantic clues through an attention mechanism.

Machine Reading Comprehension Natural Language Inference +1

Reference Language based Unsupervised Neural Machine Translation

1 code implementation Findings of the Association for Computational Linguistics 2020 Zuchao Li, Hai Zhao, Rui Wang, Masao Utiyama, Eiichiro Sumita

Further enriching the idea of pivot translation by extending the use of parallel corpora beyond the source-target paradigm, we propose a new reference language-based framework for UNMT, RUNMT, in which the reference language only shares a parallel corpus with the source, but this corpus still indicates a signal clear enough to help the reconstruction training of UNMT through a proposed reference agreement mechanism.

Machine Translation Translation

Retrospective Reader for Machine Reading Comprehension

2 code implementations 27 Jan 2020 Zhuosheng Zhang, Junjie Yang, Hai Zhao

Inspired by how humans solve reading comprehension questions, we propose a retrospective reader (Retro-Reader) that integrates two stages of reading and verification strategies: 1) sketchy reading that briefly investigates the overall interactions of passage and question and yields an initial judgment; 2) intensive reading that verifies the answer and gives the final prediction.

Machine Reading Comprehension Question Answering
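
The two-stage design can be caricatured as fusing the sketchy reader's answerability score with the intensive reader's null and span scores (the weights, threshold, and fusion rule below are illustrative assumptions, not the paper's tuned procedure):

def retro_verify(score_has_answer: float, score_null: float,
                 span_score: float, beta1: float = 0.5,
                 beta2: float = 0.5, threshold: float = 0.0) -> str:
    """Rear verification in the spirit of Retro-Reader: combine the two
    readers' no-answer evidence, then compare it to the best span."""
    null_score = beta1 * score_null + beta2 * (-score_has_answer)
    return "no answer" if null_score - span_score > threshold else "answer"

print(retro_verify(score_has_answer=1.2, score_null=0.3, span_score=2.0))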

DUMA: Reading Comprehension with Transposition Thinking

3 code implementations 26 Jan 2020 Pengfei Zhu, Hai Zhao, Xiaoguang Li

Multi-choice Machine Reading Comprehension (MRC) requires a model to decide the correct answer from a set of answer options when given a passage and a question.

Language Modelling Machine Reading Comprehension +1

Dual Multi-head Co-attention for Multi-choice Reading Comprehension

no code implementations 1 Jan 2020 Pengfei Zhu, Hai Zhao, Xiaoguang Li

Multi-choice Machine Reading Comprehension (MRC) requires a model to decide the correct answer from a set of answer options when given a passage and a question.

Language Modelling Machine Reading Comprehension +1

Explicit Sentence Compression for Neural Machine Translation

1 code implementation 27 Dec 2019 Zuchao Li, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Zhuosheng Zhang, Hai Zhao

In this paper, we propose an explicit sentence compression method to enhance the source sentence representation for NMT.

Machine Translation NMT +2

Korean-to-Chinese Machine Translation using Chinese Character as Pivot Clue

2 code implementations 25 Nov 2019 Jeonghyeok Park, Hai Zhao

Korean-Chinese is a low resource language pair, but Korean and Chinese have a lot in common in terms of vocabulary.

Machine Translation Translation

Global Greedy Dependency Parsing

1 code implementation 20 Nov 2019 Zuchao Li, Hai Zhao, Kevin Parnow

Most syntactic dependency parsing models may fall into one of two categories: transition- and graph-based models.

Dependency Parsing Re-Ranking

Dependency and Span, Cross-Style Semantic Role Labeling on PropBank and NomBank

no code implementations 7 Nov 2019 Zuchao Li, Hai Zhao, Junru Zhou, Kevin Parnow, Shexia He

In this paper, we define a new cross-style semantic role label convention and propose a new cross-style joint optimization model designed around the most basic linguistic meaning of a semantic role, providing a solution to make the results of the two styles more comparable and allowing both formalisms of SRL to benefit from their natural connections in both linguistics and computation.

Semantic Role Labeling

Probing Contextualized Sentence Representations with Visual Awareness

no code implementations 7 Nov 2019 Zhuosheng Zhang, Rui Wang, Kehai Chen, Masao Utiyama, Eiichiro Sumita, Hai Zhao

We present a universal framework to model contextualized sentence representations with visual awareness that is motivated to overcome the shortcomings of the multimodal parallel data with manual annotations.

Machine Translation Natural Language Inference +1

Hierarchical Contextualized Representation for Named Entity Recognition

1 code implementation 6 Nov 2019 Ying Luo, Fengshun Xiao, Hai Zhao

In this paper, we address these two deficiencies and propose a model augmented with hierarchical contextualized representation: sentence-level representation and document-level representation.

Ranked #12 on Named Entity Recognition (NER) on Ontonotes v5 (English) (using extra training data)

named-entity-recognition Named Entity Recognition +1

Deepening Hidden Representations from Pre-trained Language Models

no code implementations 5 Nov 2019 Junjie Yang, Hai Zhao

Transformer-based pre-trained language models have proven to be effective for learning contextualized language representation.

Natural Language Understanding

SJTU-NICT at MRP 2019: Multi-Task Learning for End-to-End Uniform Semantic Graph Parsing

no code implementations CONLL 2019 Zuchao Li, Hai Zhao, Zhuosheng Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita

This paper describes our SJTU-NICT system for participating in the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference on Computational Natural Language Learning (CoNLL).

Multi-Task Learning

SJTU at MRP 2019: A Transition-Based Multi-Task Parser for Cross-Framework Meaning Representation Parsing

no code implementations CONLL 2019 Hongxiao Bai, Hai Zhao

This paper describes the system of our team SJTU for our participation in the CoNLL 2019 Shared Task: Cross-Framework Meaning Representation Parsing.

Attention Is All You Need for Chinese Word Segmentation

1 code implementation EMNLP 2020 Sufeng Duan, Hai Zhao

Taking the greedy decoding algorithm as a given, this work focuses on further strengthening the model itself for Chinese word segmentation (CWS), which results in an even faster and more accurate CWS model.

Chinese Word Segmentation

Document-level Neural Machine Translation with Associated Memory Network

no code implementations 31 Oct 2019 Shu Jiang, Rui Wang, Zuchao Li, Masao Utiyama, Kehai Chen, Eiichiro Sumita, Hai Zhao, Bao-liang Lu

Most existing document-level NMT approaches are satisfied with a smattering sense of global document-level information, while this work focuses on exploiting detailed document-level context in terms of a memory network.

Machine Translation NMT +1

LIMIT-BERT : Linguistic Informed Multi-Task BERT

no code implementations 31 Oct 2019 Junru Zhou, Zhuosheng Zhang, Hai Zhao, Shuailiang Zhang

In this paper, we present a Linguistic Informed Multi-Task BERT (LIMIT-BERT) for learning language representations across multiple linguistic tasks by Multi-Task Learning (MTL).

Multi-Task Learning POS +2

Subword ELMo

no code implementations 18 Sep 2019 Jiangtong Li, Hai Zhao, Zuchao Li, Wei Bi, Xiaojiang Liu

Embedding from Language Models (ELMo) has shown to be effective for improving many natural language processing (NLP) tasks, and ELMo takes character information to compose word representation to train language models. However, the character is an insufficient and unnatural linguistic unit for word representation. Thus we introduce Embedding from Subword-aware Language Models (ESuLMo) which learns word representation from subwords using unsupervised segmentation over words. We show that ESuLMo can enhance four benchmark NLP tasks more effectively than ELMo, including syntactic dependency parsing, semantic role labeling, implicit discourse relation recognition and textual entailment, which brings a meaningful improvement over ELMo.

Dependency Parsing Natural Language Inference +1

Semantics-aware BERT for Language Understanding

1 code implementation 5 Sep 2019 Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao Li, Shuailiang Zhang, Xi Zhou, Xiang Zhou

The latest work on language representations carefully integrates contextualized features into language model training, which enables a series of success especially in various machine reading comprehension and natural language inference tasks.

Language Modelling Machine Reading Comprehension +5

Modeling Named Entity Embedding Distribution into Hypersphere

no code implementations 3 Sep 2019 Zhuosheng Zhang, Bingjie Tang, Zuchao Li, Hai Zhao

This work models named entity distribution by visualizing the topological structure of the embedding space, under the assumption that most, if not all, named entities (NEs) of a language tend to aggregate together and be accommodated by a specific hypersphere in embedding space.

named-entity-recognition Named Entity Recognition +1

A Smart Sliding Chinese Pinyin Input Method Editor on Touchscreen

no code implementations 3 Sep 2019 Zhuosheng Zhang, Zhen Meng, Hai Zhao

This paper presents a smart sliding Chinese pinyin Input Method Editor (IME) for touchscreen devices, which allows the user to slide a finger from one key to another on the touchscreen instead of tapping keys one by one, while the target Chinese character sequence is predicted during the sliding process to help the user input Chinese characters efficiently.

Syntax-aware Multilingual Semantic Role Labeling

1 code implementation IJCNLP 2019 Shexia He, Zuchao Li, Hai Zhao

Recently, semantic role labeling (SRL) has earned a series of success with even higher performance improvements, which can be mainly attributed to syntactic integration and enhanced word representation.

Semantic Role Labeling

Named Entity Recognition Only from Word Embeddings

no code implementations EMNLP 2020 Ying Luo, Hai Zhao, Junlang Zhan

Deep neural network models have helped named entity (NE) recognition achieve amazing performance without handcrafting features.

named-entity-recognition Named Entity Recognition +3

Open Named Entity Modeling from Embedding Distribution

no code implementations 31 Aug 2019 Ying Luo, Hai Zhao, Zhuosheng Zhang, Bingjie Tang

For monolingual cases, the proposed named entity model gives an open description of diverse named entity types and different languages.

named-entity-recognition Named Entity Recognition +2

DCMN+: Dual Co-Matching Network for Multi-choice Reading Comprehension

2 code implementations 30 Aug 2019 Shuailiang Zhang, Hai Zhao, Yuwei Wu, Zhuosheng Zhang, Xi Zhou, Xiang Zhou

Multi-choice reading comprehension is a challenging task of selecting an answer from a set of candidate options when given a passage and a question.

Reading Comprehension

Parsing All: Syntax and Semantics, Dependencies and Spans

1 code implementation Findings of the Association for Computational Linguistics 2020 Junru Zhou, Zuchao Li, Hai Zhao

Both syntactic and semantic structures are key linguistic contextual clues, in which parsing the latter has been well shown beneficial from parsing the former.

Semantic Parsing

Memorizing All for Implicit Discourse Relation Recognition

no code implementations 29 Aug 2019 Hongxiao Bai, Hai Zhao, Junhan Zhao

As an implicit discourse relation recognizer has to carefully tackle the semantic similarity of the given sentence pairs while a severe data sparsity issue exists at the same time, it is expected to benefit from mastering the entire training data.

Semantic Similarity Semantic Textual Similarity

Controllable Dual Skew Divergence Loss for Neural Machine Translation

no code implementations 22 Aug 2019 Zuchao Li, Hai Zhao, Yingting Wu, Fengshun Xiao, Shu Jiang

Our experiments indicate that switching to the DSD loss after the convergence of ML training helps models escape local optima and stimulates stable performance improvements.

Machine Translation NMT +1

Concurrent Parsing of Constituency and Dependency

no code implementations 18 Aug 2019 Junru Zhou, Shuailiang Zhang, Hai Zhao

Constituent and dependency representations of syntactic structure share many linguistic and computational characteristics; this paper thus makes the first attempt to introduce a new model capable of parsing constituents and dependencies at the same time, so that either parser can enhance the other.

Dependency Parsing

SG-Net: Syntax-Guided Machine Reading Comprehension

1 code implementation 14 Aug 2019 Zhuosheng Zhang, Yuwei Wu, Junru Zhou, Sufeng Duan, Hai Zhao, Rui Wang

In detail, for self-attention network (SAN) sponsored Transformer-based encoder, we introduce syntactic dependency of interest (SDOI) design into the SAN to form an SDOI-SAN with syntax-guided self-attention.

Language Modelling Machine Reading Comprehension +1

Semantic Role Labeling with Associated Memory Network

1 code implementation NAACL 2019 Chaoyu Guan, Yuhao Cheng, Hai Zhao

Semantic role labeling (SRL) is the task of recognizing all the predicate-argument pairs of a sentence; it has been stuck in a performance-improvement bottleneck even after a series of recent works were presented.

Semantic Role Labeling

Lattice-Based Transformer Encoder for Neural Machine Translation

no code implementations ACL 2019 Fengshun Xiao, Jiangtong Li, Hai Zhao, Rui Wang, Kehai Chen

To integrate different segmentations with the state-of-the-art NMT model, Transformer, we propose lattice-based encoders to explore effective word or subword representation in an automatic way during training.

Machine Translation NMT +1

GAN Driven Semi-distant Supervision for Relation Extraction

no code implementations NAACL 2019 Pengshuai Li, Xinsong Zhang, Weijia Jia, Hai Zhao

Distant supervision has recently been widely used in relation extraction tasks that lack hand-labeled datasets.

Relation Extraction

Judging Chemical Reaction Practicality From Positive Sample only Learning

no code implementations 22 Apr 2019 Shu Jiang, Zhuosheng Zhang, Hai Zhao, Jiangtong Li, Yang Yang, Bao-liang Lu, Ning Xia

Chemical reaction practicality is the core task among all symbol-intelligence-based chemical information processing; for example, it provides an indispensable clue for further automatic synthesis route inference.

Span Model for Open Information Extraction on Accurate Corpus

1 code implementation 30 Jan 2019 Junlang Zhan, Hai Zhao

Open information extraction (Open IE) is a challenging task especially due to its brittle data basis.

Open Information Extraction

Chemical Names Standardization using Neural Sequence to Sequence Model

1 code implementation ICLR 2019 Junlang Zhan, Hai Zhao

Chemical information extraction aims to convert chemical knowledge in text into a true chemical database, a text processing task that relies heavily on chemical compound name identification and standardization.

Chinese Word Segmentation: Another Decade Review (2007-2017)

no code implementations 18 Jan 2019 Hai Zhao, Deng Cai, Changning Huang, Chunyu Kit

This paper reviews the development of Chinese word segmentation (CWS) in the most recent decade, 2007-2017.

Chinese Word Segmentation

Open Vocabulary Learning for Neural Chinese Pinyin IME

1 code implementation ACL 2019 Zhuosheng Zhang, Yafang Huang, Hai Zhao

Pinyin-to-character (P2C) conversion is the core component of pinyin-based Chinese input method engine (IME).

Fast Neural Chinese Word Segmentation for Long Sentences

no code implementations 6 Nov 2018 Sufeng Duan, Jiangtong Li, Hai Zhao

Rapidly developed neural models have achieved performance in Chinese word segmentation (CWS) competitive with their traditional counterparts.

Chinese Word Segmentation TAG

Joint Learning of POS and Dependencies for Multilingual Universal Dependency Parsing

1 code implementation CONLL 2018 Zuchao Li, Shexia He, Zhuosheng Zhang, Hai Zhao

This paper describes the system of team LeisureX in the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.

Lemmatization Part-Of-Speech Tagging +3

Multilingual Universal Dependency Parsing from Raw Text with Low-Resource Language Enhancement

no code implementations CONLL 2018 Yingting Wu, Hai Zhao, Jia-Jun Tong

This paper describes the system of our team Phoenix for participating CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies.

Dependency Parsing Part-Of-Speech Tagging

Attentive Semantic Role Labeling with Boundary Indicator

no code implementations 8 Sep 2018 Zhuosheng Zhang, Shexia He, Zuchao Li, Hai Zhao

The goal of semantic role labeling (SRL) is to discover the predicate-argument structure of a sentence, which plays a critical role in deep processing of natural language.

Semantic Role Labeling

Explicit Contextual Semantics for Text Comprehension

no code implementations 8 Sep 2018 Zhuosheng Zhang, Yuwei Wu, Zuchao Li, Hai Zhao

Who did what to whom is a major focus in natural language understanding, which is precisely the aim of the semantic role labeling (SRL) task.

Machine Reading Comprehension Natural Language Understanding +1

Chinese Pinyin Aided IME, Input What You Have Not Keystroked Yet

1 code implementation EMNLP 2018 Yafang Huang, Hai Zhao

The Chinese pinyin input method engine (IME) converts pinyin into characters so that Chinese characters can be conveniently input into a computer through a common keyboard.

Exploring Recombination for Efficient Decoding of Neural Machine Translation

1 code implementation EMNLP 2018 Zhisong Zhang, Rui Wang, Masao Utiyama, Eiichiro Sumita, Hai Zhao

In Neural Machine Translation (NMT), the decoder can capture the features of the entire prediction history with neural connections and representations.

Machine Translation NMT +1

A Full End-to-End Semantic Role Labeler, Syntax-agnostic Over Syntax-aware?

1 code implementation 11 Aug 2018 Jiaxun Cai, Shexia He, Zuchao Li, Hai Zhao

Semantic role labeling (SRL) is to recognize the predicate-argument structure of a sentence, including subtasks of predicate disambiguation and argument labeling.

Semantic Role Labeling

Seq2seq Dependency Parsing

1 code implementation COLING 2018 Zuchao Li, Jiaxun Cai, Shexia He, Hai Zhao

This paper presents a sequence to sequence (seq2seq) dependency parser by directly predicting the relative position of head for each given word, which therefore results in a truly end-to-end seq2seq dependency parser for the first time.

Dependency Parsing Feature Engineering
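
The relative-head-position encoding itself can be pictured with a tiny round-trip converter (a sketch; using offset 0 to mark the root is an assumed convention, safe because no word can head itself):

def heads_to_relative(heads: list[int]) -> list[int]:
    """Encode a dependency tree as the target sequence the parser
    predicts: for each word, the relative position of its head
    (0 marks the root)."""
    return [0 if h == -1 else h - i for i, h in enumerate(heads)]

def relative_to_heads(offsets: list[int]) -> list[int]:
    """Inverse mapping, recovering absolute head indices."""
    return [-1 if off == 0 else i + off for i, off in enumerate(offsets)]

# "the cat sat": the -> cat, cat -> sat, sat -> ROOT
offsets = heads_to_relative([1, 2, -1])
assert relative_to_heads(offsets) == [1, 2, -1]
print(offsets)  # [1, 1, 0]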

A Full End-to-End Semantic Role Labeler, Syntactic-agnostic Over Syntactic-aware?

no code implementations COLING 2018 Jiaxun Cai, Shexia He, Zuchao Li, Hai Zhao

Semantic role labeling (SRL) is to recognize the predicate-argument structure of a sentence, including subtasks of predicate disambiguation and argument labeling.

Machine Translation Question Answering +2

Finding Better Subword Segmentation for Neural Machine Translation

1 code implementation 25 Jul 2018 Yingting Wu, Hai Zhao

For different language pairs, word-level neural machine translation (NMT) models with a fixed-size vocabulary suffer from the same problem of representing out-of-vocabulary (OOV) words.

Machine Translation NMT +1

Deep Enhanced Representation for Implicit Discourse Relation Recognition

1 code implementation COLING 2018 Hongxiao Bai, Hai Zhao

Implicit discourse relation recognition is a challenging task as the relation prediction without explicit connectives in discourse parsing needs understanding of text spans and cannot be easily derived from surface features from the input sentence pairs.

Discourse Parsing

Moon IME: Neural-based Chinese Pinyin Aided Input Method with Customizable Association

no code implementations ACL 2018 Yafang Huang, Zuchao Li, Zhuosheng Zhang, Hai Zhao

The Chinese pinyin input method engine (IME) lets users conveniently input Chinese into a computer by typing pinyin on a common keyboard.

Association Information Retrieval +4

One-shot Learning for Question-Answering in Gaokao History Challenge

1 code implementation COLING 2018 Zhuosheng Zhang, Hai Zhao

Answering questions from university admission exams (Gaokao in Chinese) is a challenging AI task since it requires effective representation to capture complicated semantic relations between questions and answers.

One-Shot Learning Question Answering

Modeling Multi-turn Conversation with Deep Utterance Aggregation

1 code implementation COLING 2018 Zhuosheng Zhang, Jiangtong Li, Pengfei Zhu, Hai Zhao, Gongshen Liu

In this paper, we formulate previous utterances into context using a proposed deep utterance aggregation model to form a fine-grained context representation.

Conversational Response Selection Retrieval

SJTU-NLP at SemEval-2018 Task 9: Neural Hypernym Discovery with Term Embeddings

no code implementations SEMEVAL 2018 Zhuosheng Zhang, Jiangtong Li, Hai Zhao, Bingjie Tang

This paper describes a hypernym discovery system for our participation in the SemEval-2018 Task 9, which aims to discover the best (set of) candidate hypernyms for input concepts or entities, given the search space of a pre-defined vocabulary.

Hypernym Discovery