A Frustratingly Easy Approach for Entity and Relation Extraction

NAACL 2021 · Zexuan Zhong, Danqi Chen

End-to-end relation extraction aims to identify named entities and extract relations between them. Most recent work models these two subtasks jointly, either by casting them in one structured prediction framework, or performing multi-task learning through shared representations. In this work, we present a simple pipelined approach for entity and relation extraction, and establish the new state-of-the-art on standard benchmarks (ACE04, ACE05 and SciERC), obtaining a 1.7%–2.8% absolute improvement in relation F1 over previous joint models with the same pre-trained encoders. Our approach essentially builds on two independent encoders and merely uses the entity model to construct the input for the relation model. Through a series of careful examinations, we validate the importance of learning distinct contextual representations for entities and relations, fusing entity information early in the relation model, and incorporating global context. Finally, we also present an efficient approximation to our approach which requires only one pass of both entity and relation encoders at inference time, achieving an 8–16× speedup with a slight reduction in accuracy.
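The key mechanism behind "fusing entity information early" is that the entity model's predicted spans and types are injected into the relation model's input as typed marker tokens, so the relation encoder sees entity boundaries and types directly. Below is a minimal sketch of that input construction, assuming a simple `<S:TYPE>`/`<O:TYPE>` marker format; the function name and marker strings are illustrative assumptions, not the authors' released code.

```python
def insert_typed_markers(tokens, subj, obj):
    """Wrap a candidate subject/object pair with typed entity markers.

    tokens: list[str]              -- the input sentence
    subj, obj: (start, end, type)  -- inclusive token span + predicted entity type
    """
    s_start, s_end, s_type = subj
    o_start, o_end, o_type = obj
    # Insert from the rightmost position first so earlier indices stay valid.
    inserts = sorted(
        [
            (s_start, f"<S:{s_type}>"), (s_end + 1, f"</S:{s_type}>"),
            (o_start, f"<O:{o_type}>"), (o_end + 1, f"</O:{o_type}>"),
        ],
        key=lambda x: x[0],
        reverse=True,
    )
    out = list(tokens)
    for pos, marker in inserts:
        out.insert(pos, marker)
    return out


tokens = ["John", "Smith", "works", "for", "ACME", "Corp", "."]
print(insert_typed_markers(tokens, subj=(0, 1, "PER"), obj=(4, 5, "ORG")))
# ['<S:PER>', 'John', 'Smith', '</S:PER>', 'works', 'for',
#  '<O:ORG>', 'ACME', 'Corp', '</O:ORG>', '.']
```

In the full pipeline, each marked sentence is encoded by a separate pre-trained relation encoder and the relation label is predicted from the marker representations; since every candidate entity pair yields its own marked input, the approximate variant instead batches all pairs into one encoder pass by tying marker position embeddings to their spans, which is where the reported speedup comes from.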

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Relation Extraction | ACE 2004 | Ours: cross-sentence ALB | RE Micro F1 | 66.1 | # 2 |
| | | | NER Micro F1 | 90.3 | # 2 |
| | | | RE+ Micro F1 | 62.2 | # 3 |
| | | | Cross Sentence | Yes | # 1 |
| Named Entity Recognition (NER) | ACE 2004 | Ours: cross-sentence ALB | F1 | 90.3 | # 1 |
| | | | Multi-Task Supervision | Yes | # 1 |
| Joint Entity and Relation Extraction | ACE 2005 | Ours: cross-sentence ALB | Relation F1 | 62.2 | # 1 |
| Relation Extraction | ACE 2005 | Ours: cross-sentence ALB | RE Micro F1 | 69.4 | # 4 |
| | | | NER Micro F1 | 90.9 | # 3 |
| | | | Sentence Encoder | ALBERT | # 1 |
| | | | Cross Sentence | Yes | # 1 |
| Named Entity Recognition (NER) | ACE 2005 | Ours: cross-sentence ALB | F1 | 90.9 | # 1 |
| Joint Entity and Relation Extraction | SciERC | Ours: cross-sentence | Entity F1 | 68.9 | # 6 |
| | | | Relation F1 | 50.1 | # 5 |
| | | | RE+ Micro F1 | 36.7 | # 1 |
| | | | Cross Sentence | Yes | # 1 |
| Named Entity Recognition (NER) | SciERC | Ours: cross-sentence | F1 | 68.2 | # 4 |
