ITER: Iterative Transformer-based Entity Recognition and Relation Extraction

When extracting structured information from text, recognizing entities and extracting relationships are essential. Recent advances in both tasks generate a structured representation of the information in an autoregressive manner, a time-consuming and computationally expensive approach. This naturally raises the question of whether autoregressive methods are necessary to achieve comparable results. In this work, we propose ITER, an efficient encoder-based relation extraction model that performs the task in three parallelizable steps, greatly accelerating a recent language modeling approach: ITER achieves an inference throughput of over 600 samples per second for a large model on a single consumer-grade GPU. Furthermore, we achieve state-of-the-art results on the relation extraction datasets ADE and ACE05, and demonstrate competitive performance for named entity recognition on GENIA and CoNLL03 as well as for relation extraction on SciERC and CoNLL04.
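The abstract does not spell out the three parallelizable steps, but the key idea of a non-autoregressive pipeline can be illustrated. Below is a minimal, hypothetical Python sketch of such a three-step extraction scheme (flag candidate span starts, resolve span ends into entity spans, classify relations between entity pairs); each step is a simple filter over precomputed scores and could be executed in parallel. All function names, score formats, and thresholds here are illustrative assumptions, not ITER's actual interface.

```python
# Hypothetical sketch of a non-autoregressive, three-step extraction pipeline.
# Scores would normally come from an encoder; here they are plain lists.
from itertools import product

def step1_span_starts(start_scores, threshold=0.5):
    """Step 1: flag every token that may start an entity span (parallel over tokens)."""
    return [i for i, s in enumerate(start_scores) if s >= threshold]

def step2_spans(starts, end_scores, threshold=0.5):
    """Step 2: for each candidate start i, accept ends j >= i (parallel over starts)."""
    spans = []
    for i in starts:
        for j, s in enumerate(end_scores[i]):
            if j >= i and s >= threshold:
                spans.append((i, j))
    return spans

def step3_relations(spans, rel_score, threshold=0.5):
    """Step 3: score every ordered pair of entity spans for a relation (parallel over pairs)."""
    return [(a, b) for a, b in product(spans, repeat=2)
            if a != b and rel_score(a, b) >= threshold]

# Toy example over a 4-token input.
start_scores = [0.9, 0.1, 0.8, 0.2]
end_scores = [
    [0.6, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.7, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
starts = step1_span_starts(start_scores)   # [0, 2]
spans = step2_spans(starts, end_scores)    # [(0, 0), (2, 2)]
rels = step3_relations(
    spans, lambda a, b: 0.9 if (a, b) == ((0, 0), (2, 2)) else 0.0
)                                          # [((0, 0), (2, 2))]
```

Because no step conditions on previously generated output tokens, the whole pipeline avoids the sequential decoding loop of autoregressive approaches, which is what enables the reported throughput gains.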

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Relation Extraction | ACE 2005 | ITER | RE Micro F1 | 75.1 ± 0.49 | # 1 |
| Relation Extraction | ACE 2005 | ITER | NER Micro F1 | 91.6 ± 0.12 | # 1 |
| Relation Extraction | ACE 2005 | ITER | RE+ Micro F1 | 71.9 ± 0.56 | # 1 |
| Relation Extraction | ACE 2005 | ITER | Sentence Encoder | FLAN T5 3B | # 1 |
| Relation Extraction | ACE 2005 | ITER | Cross Sentence | Yes | # 1 |
| Relation Extraction | Adverse Drug Events (ADE) Corpus | ITER | RE+ Macro F1 | 85.6 ± 1.42 | # 1 |
| Relation Extraction | Adverse Drug Events (ADE) Corpus | ITER | NER Macro F1 | 92.63 ± 0.89 | # 1 |
