Entity-Relation Extraction as Multi-Turn Question Answering

In this paper, we propose a new paradigm for the task of entity-relation extraction. We cast the task as a multi-turn question answering problem, i.e., the extraction of entities and relations is transformed into the task of identifying answer spans from the context. This multi-turn QA formalization comes with several key advantages: first, the question query encodes important information about the entity/relation class we want to identify; second, QA provides a natural way of jointly modeling entities and relations; and third, it allows us to exploit well-developed machine reading comprehension (MRC) models. Experiments on the ACE and CoNLL04 corpora demonstrate that the proposed paradigm significantly outperforms previous best models. We obtain state-of-the-art results on the ACE04, ACE05 and CoNLL04 datasets, raising the SOTA on the three datasets to 49.4 (+1.0), 60.2 (+0.6) and 68.9 (+2.1), respectively. Additionally, we construct a new Chinese dataset, RESUME, which requires multi-step reasoning to build entity dependencies, as opposed to the single-step triplet extraction in previous datasets. The proposed multi-turn QA model also achieves the best performance on the RESUME dataset.
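As a rough illustration of the multi-turn formulation, the sketch below chains two extractive QA calls: the first turn asks an entity-level question, and the second turn asks a relation-level question templated on the first answer, so the relation is extracted jointly with the entity it depends on. The question templates, the example sentence, and the off-the-shelf SQuAD-style model are illustrative placeholders, not the paper's actual templates or its BERT-based MRC model.

```python
# Minimal sketch of multi-turn QA for entity-relation extraction.
# The templates and the pretrained QA model below are assumptions for
# illustration only; the paper uses its own templates and a BERT-based MRC model.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

context = "John Smith joined Acme Corp as chief engineer in 2015."

# Turn 1: an entity-level question identifies an entity of a given type.
turn1 = qa(question="Which person is mentioned in the text?", context=context)
person = turn1["answer"]  # e.g. "John Smith"

# Turn 2: a relation-level question conditions on the answer from turn 1.
turn2 = qa(question=f"Which company does {person} work for?", context=context)
company = turn2["answer"]  # e.g. "Acme Corp"

print((person, "works_for", company))
```

Because later questions are templated on earlier answers, chaining more turns in this way naturally supports the multi-step entity dependencies targeted by the RESUME dataset.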

PDF Abstract (ACL 2019)

Results from the Paper


 Ranked #1 on Relation Extraction on ACE 2005 (Sentence Encoder metric)

Task                 Dataset   Model          Metric Name       Metric Value  Global Rank
Relation Extraction  ACE 2004  Multi-turn QA  NER Micro F1      83.6          #6
Relation Extraction  ACE 2004  Multi-turn QA  RE+ Micro F1      49.4          #5
Relation Extraction  ACE 2004  Multi-turn QA  Cross Sentence    No            #1
Relation Extraction  ACE 2005  Multi-turn QA  NER Micro F1      84.8          #14
Relation Extraction  ACE 2005  Multi-turn QA  RE+ Micro F1      60.2          #9
Relation Extraction  ACE 2005  Multi-turn QA  Sentence Encoder  BERT base     #1
Relation Extraction  ACE 2005  Multi-turn QA  Cross Sentence    No            #1
Relation Extraction  CoNLL04   Multi-turn QA  RE+ Micro F1      68.9          #9
Relation Extraction  CoNLL04   Multi-turn QA  NER Micro F1      87.8          #7

Methods


No methods listed for this paper.