A Masked Image Reconstruction Network for Document-level Relation Extraction

21 Apr 2022 · Liang Zhang, Yidong Cheng

Document-level relation extraction aims to extract relations among entities within a document. Compared with its sentence-level counterpart, it requires inference over multiple sentences to extract complex relational triples. Previous work typically performs this reasoning through information propagation over mention-level or entity-level document graphs, disregarding the correlations between relations. In this paper, we propose a novel document-level relation extraction model based on a Masked Image Reconstruction network (DRE-MIR), which models inference as a masked image reconstruction problem in order to capture the correlations between relations. Specifically, we first use an encoder module to obtain entity features and construct an entity-pair matrix from them. We then treat the entity-pair matrix as an image, randomly mask it, and restore it through an inference module so that the model learns the correlations between relations. We evaluate our model on three public document-level relation extraction datasets, i.e. DocRED, CDR, and GDA. Experimental results demonstrate that our model achieves state-of-the-art performance on all three datasets and is highly robust to noise introduced during the inference process.
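The mask-and-restore idea from the abstract can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the random entity features stand in for the encoder's output, pairs are formed by concatenating head and tail features, masking replaces a pair with a zero vector, and the "inference module" is simplified to a row-mean fill-in rather than the learned network described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
num_entities, feat_dim = 6, 8

# Entity features (would come from the paper's encoder module; random here).
entity_feats = rng.normal(size=(num_entities, feat_dim))

# Entity-pair matrix: concatenate head and tail features for every pair,
# yielding an "image" of shape (E, E, 2 * feat_dim).
pair_matrix = np.concatenate(
    [
        np.repeat(entity_feats[:, None, :], num_entities, axis=1),  # head
        np.repeat(entity_feats[None, :, :], num_entities, axis=0),  # tail
    ],
    axis=-1,
)

# Randomly mask a fraction of entity-pair "pixels", as in masked image modeling.
mask_ratio = 0.3
mask = rng.random((num_entities, num_entities)) < mask_ratio
masked = pair_matrix.copy()
masked[mask] = 0.0  # masked pairs are replaced with a zero token

# Toy "inference module": restore each masked pair from the mean of the
# unmasked pairs in its row (a stand-in for the paper's learned module).
row_mean = masked.sum(axis=1, keepdims=True) / np.maximum(
    (~mask).sum(axis=1, keepdims=True)[..., None], 1
)
restored = masked.copy()
restored[mask] = np.broadcast_to(row_mean, pair_matrix.shape)[mask]

# Reconstruction loss computed on the masked positions only.
loss = np.mean((restored[mask] - pair_matrix[mask]) ** 2)
print(float(loss))
```

In the actual model the restored entity-pair matrix would feed a relation classifier, and the reconstruction objective pushes the inference module to exploit correlations between relations when filling in masked pairs.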



Results from the Paper


Task                 Dataset  Model              Metric   Value   Global Rank
Relation Extraction  CDR      DRE-MIR-SciBERT    F1       76.6    #3
Relation Extraction  DocRED   DRE-MIR-BERTbase   F1       63.15   #10
Relation Extraction  DocRED   DRE-MIR-BERTbase   Ign F1   61.03   #10
Relation Extraction  GDA      DRE-MIR-SciBERT    F1       86.4    #3
