Multi-document summarization (MDS) aims to produce a high-quality summary of several related documents.
Recent pretrained language models have grown from millions to billions of parameters.
However, we find they suffer from trigger biases, i.e., statistical regularities between certain trigger words and target event types, which we summarize as trigger overlapping and trigger separability.
In this paper, we divide ASTE into target-opinion joint detection and sentiment classification subtasks, which is in line with human cognition, and correspondingly propose sequence encoder and table encoder.
1 code implementation • 15 Jun 2021 • Ningyu Zhang, Mosha Chen, Zhen Bi, Xiaozhuan Liang, Lei LI, Xin Shang, Kangping Yin, Chuanqi Tan, Jian Xu, Fei Huang, Luo Si, Yuan Ni, Guotong Xie, Zhifang Sui, Baobao Chang, Hui Zong, Zheng Yuan, Linfeng Li, Jun Yan, Hongying Zan, Kunli Zhang, Buzhou Tang, Qingcai Chen
Artificial Intelligence (AI), along with the recent progress in biomedical language understanding, is gradually changing medical practice.
Ranked #1 on Named Entity Recognition on CMeEE
However, not all entity pairs can be connected with a path, and not all have correct logical reasoning paths in the graph.
Ranked #8 on Relation Extraction on DocRED
In this paper, we tackle the task of Definition Generation (DG) in Chinese, which aims at automatically generating a definition for a word.
Existing methods are not effective due to two challenges of this task: a) the target event arguments are scattered across sentences; b) the correlation among events in a document is non-trivial to model.
Starting from the concept, composition, development, and significance of natural language evaluation, this article classifies and summarizes the tasks and characteristics of mainstream natural language evaluation, and then summarizes the problems in natural language processing evaluation and their causes.
Conventional Machine Reading Comprehension (MRC) has been well addressed by pattern matching, but commonsense reasoning remains a gap between humans and machines.
In open domain table-to-text generation, we notice that unfaithful generation usually contains hallucinated content which cannot be aligned to any input table record.
Recent years have seen significant advancement in text generation tasks with the help of neural language models.
In classification, we combine the entity representations from both levels into more comprehensive representations for relation extraction.
Ranked #16 on Relation Extraction on DocRED
The widespread adoption of reference-based automatic evaluation metrics such as ROUGE has promoted the development of document summarization.
We explore training objectives for discriminative fine-tuning of our generative classifiers, showing improvements over log loss fine-tuning from prior work.
Prior work on natural language inference (NLI) debiasing mainly targets one or a few known biases and does not necessarily make the models more robust.
Document-level relation extraction aims to extract relations among entities within a document.
Ranked #5 on Relation Extraction on DocRED
Conventional Knowledge Graph Completion (KGC) assumes that all test entities appear during training.
Many recent studies have shown that for models trained on datasets for natural language inference (NLI), it is possible to make correct predictions by merely looking at the hypothesis while completely ignoring the premise.
It consists of a generator to produce pun sentences, and a discriminator to distinguish between the generated pun sentences and the real sentences with specific word senses.
Therefore, we propose a generic and novel framework which consists of a sentiment analyzer and a sentimental generator, respectively addressing the two challenges.
To relieve these problems, we first propose a force attention (FA) method to encourage the generator to pay more attention to uncovered attributes, avoiding the omission of potentially key attributes.
In this paper, we propose a soft label approach to the target-level sentiment classification task, in which a history-based soft labeling model is proposed to measure the probability that a context word is an opinion word.
Therefore, in this paper, we propose a dual reinforcement learning framework to directly transfer the style of the text via a one-step mapping model, without any separation of content and style.
Ranked #1 on Unsupervised Text Style Transfer on Yelp
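The dual reinforcement learning idea above combines two training signals: a style reward (does the output carry the target style?) and a content reward (can the source be reconstructed from the output?). The harmonic-mean combination below is a minimal, hypothetical sketch of such a combined reward, not necessarily the paper's exact formulation.

```python
def overall_reward(r_style: float, r_content: float, beta: float = 1.0) -> float:
    """Combine a style-classifier reward and a back-reconstruction reward.

    A harmonic-mean (F-beta style) combination rewards transfers that
    succeed on BOTH axes; doing well on only one yields a low reward.
    """
    if r_style == 0.0 or r_content == 0.0:
        return 0.0
    return (1 + beta ** 2) * r_style * r_content / (beta ** 2 * r_style + r_content)


# A fluent but style-less output scores low:
# overall_reward(0.1, 0.9) -> 0.18, while overall_reward(0.9, 0.9) -> 0.9
```

The multiplicative interaction is the key design choice: an additive reward would let the model trade content preservation for style strength, whereas the harmonic mean forces both.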
This paper proposes to study fine-grained coordinated cross-lingual text stream alignment through a novel information network decipherment paradigm.
The goal of Word Sense Disambiguation (WSD) is to identify the correct meaning of a word in a particular context.
Pre-trained word embeddings and language models have been shown to be useful in many tasks.
GAS models the semantic relationship between the context and the gloss in an improved memory network framework, which breaks the barriers of the previous supervised methods and knowledge-based methods.
Ranked #3 on Word Sense Disambiguation on SemEval 2015 Task 13
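At its core, gloss-based WSD of the kind GAS performs scores each candidate sense by how well its gloss representation matches the context representation. The sketch below shows that matching step only, assuming the context and gloss encodings are already produced by some encoder (the memory-network machinery of GAS is omitted).

```python
import numpy as np


def disambiguate(context_vec: np.ndarray, gloss_vecs: np.ndarray) -> int:
    """Pick the sense whose gloss best matches the context.

    context_vec: (d,)   encoding of the ambiguous word in its context
    gloss_vecs:  (n, d) encodings of each candidate sense's dictionary gloss

    Returns the index of the highest-scoring sense under dot-product scoring.
    """
    scores = gloss_vecs @ context_vec
    return int(np.argmax(scores))
```

In the full model this scoring would be iterated over memory-network hops, refining the context representation with the attended glosses before the final prediction.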
In the decoding phase, a dual attention mechanism, which contains word-level attention and field-level attention, is proposed to model the semantic relevance between the generated description and the table.
Ranked #1 on Table-to-Text Generation on WikiBio
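One way to realize the dual attention described above is to compute separate distributions over table cells from their word content and from their field (attribute) names, then fuse them. The fusion below (elementwise product plus renormalization) is a plausible sketch under assumed inputs, not necessarily the paper's exact equations.

```python
import numpy as np


def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()


def dual_attention(dec_state: np.ndarray,
                   word_keys: np.ndarray,
                   field_keys: np.ndarray) -> np.ndarray:
    """Combine word-level and field-level attention over table cells.

    dec_state:  (d,)   decoder hidden state
    word_keys:  (n, d) one key per table cell, from its word content
    field_keys: (n, d) one key per table cell, from its field name

    The two distributions are multiplied elementwise and renormalized,
    so a cell gets weight only if it matches on both content and field.
    """
    a_word = softmax(word_keys @ dec_state)
    a_field = softmax(field_keys @ dec_state)
    combined = a_word * a_field
    return combined / combined.sum()
```

The multiplicative fusion means field-level attention acts as a soft filter on word-level attention, which helps the decoder stay on the correct table attribute while generating.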
Distantly supervised relation extraction inevitably suffers from the wrong-labeling problem because it heuristically labels relational facts with knowledge bases.
Generating texts from structured data (e.g., a table) is important for various natural language processing tasks such as question answering and dialog systems.
Multi-document summarization provides users with a short text that summarizes the information in a set of related documents.
In the Semantic Role Labeling (SRL) task, the tree-structured dependency relation is rich in syntactic information, but it is not well handled by existing models.
We first match the question and passage with gated attention-based recurrent networks to obtain the question-aware passage representation.
Ranked #27 on Question Answering on SQuAD1.1 dev
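The gated attention step described above can be sketched as follows: for each passage word, attend over the question, concatenate the attended question context, and scale the pair with a learned sigmoid gate before feeding it to the recurrent matching layer (omitted here). This is a simplified dot-product version; the weight `W_g` and the attention form are assumptions, not the paper's exact parameterization.

```python
import numpy as np


def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()


def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))


def gated_attention(passage: np.ndarray, question: np.ndarray,
                    W_g: np.ndarray) -> np.ndarray:
    """Question-aware passage representation via gated attention.

    passage:  (m, d) passage word vectors
    question: (n, d) question word vectors
    W_g:      (2d, 2d) gate weights

    Returns an (m, 2d) matrix of gated [word; attended-question] vectors.
    """
    out = []
    for p in passage:
        alpha = softmax(question @ p)      # attention over question words
        c = alpha @ question               # attended question context
        x = np.concatenate([p, c])         # [p; c]
        out.append(sigmoid(W_g @ x) * x)   # gate decides how much to keep
    return np.stack(out)
```

The gate lets the network suppress passage words that are irrelevant to the question before the recurrent layer aggregates them, which is what makes the representation "question-aware".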
For the semantic role labeling (SRL) task, both traditional methods and recent recurrent neural network (RNN) based methods rely on feature engineering to utilize parsing information.
Previous studies on Chinese semantic role labeling (SRL) have concentrated on a single semantically annotated corpus.
After reading the premise again, the model gains a better understanding of the premise, which in turn improves its understanding of the hypothesis.
Ranked #38 on Natural Language Inference on SNLI
In this paper, we present a novel time-aware knowledge graph completion model that is able to predict links in a KG using both the existing facts and the temporal information of the facts.
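A common way to incorporate temporal information into a translation-based KG completion model is to give each timestamp its own embedding and add it to the relation translation, so that a fact is plausible at time tau when h + r + tau lands near t. The TTransE-style score below illustrates this family of models; the paper's exact scoring function may differ.

```python
import numpy as np


def time_aware_score(h: np.ndarray, r: np.ndarray,
                     t: np.ndarray, tau: np.ndarray) -> float:
    """Plausibility score for the timestamped triple (h, r, t, tau).

    h, r, t, tau: (d,) embeddings of head entity, relation, tail entity,
    and timestamp. The fact is plausible when h + r + tau is close to t,
    so LOWER scores are better (L1 distance).
    """
    return float(np.linalg.norm(h + r + tau - t, ord=1))


# A perfectly consistent fact scores 0:
# time_aware_score([0,0], [1,0], [1,1], [0,1]) -> 0.0
```

Link prediction then ranks all candidate tails by this score at the query timestamp, so the same (h, r) pair can prefer different tails at different times.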
We introduce a novel Burst Information Network (BINet) representation that can display the most important information and illustrate the connections among bursty entities, events and keywords in the corpus.
Automatic event schema induction (AESI) aims to extract meta-events from raw text, in other words, to find out what event types (templates) may exist in the raw text and what roles (slots) may exist in each event type.