Fine-tune Bert for DocRED with Two-step Process

26 Sep 2019  ·  Hong Wang, Christfried Focke, Rob Sylvester, Nilesh Mishra, William Wang

Modelling relations between multiple entities has attracted increasing attention recently, and a new dataset called DocRED has been collected to accelerate research on document-level relation extraction. Current baselines for this task use a BiLSTM to encode the whole document and are trained from scratch. We argue that such simple baselines are not strong enough to model the complex interactions between entities. In this paper, we apply a pre-trained language model (BERT) to provide a stronger baseline for this task. We also find that solving this task in two phases can further improve performance: the first step predicts whether two entities have a relation, and the second step predicts the specific relation.
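
To make the two-step idea concrete, below is a minimal sketch in PyTorch with HuggingFace Transformers. The class name, the mention-averaging entity representation, and the concatenation-based classifier heads are illustrative assumptions rather than the authors' exact architecture; the sketch only shows how a binary has-relation step can gate a relation-type step on top of a shared BERT encoder.

```python
# Illustrative sketch of the two-step pipeline (assumed details, not the
# paper's exact architecture): BERT encodes the document once, step 1 decides
# whether an entity pair has any relation, step 2 names the relation only for
# pairs that pass step 1.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

NUM_RELATIONS = 96  # DocRED defines 96 relation types (excluding "no relation")

class TwoStepRelationModel(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_relations=NUM_RELATIONS):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # Step 1: binary classifier -- does this entity pair express any relation?
        self.has_relation = nn.Linear(2 * hidden, 2)
        # Step 2: multi-class classifier -- which relation does the pair express?
        self.relation_type = nn.Linear(2 * hidden, num_relations)

    def entity_repr(self, token_states, token_positions):
        # Average the encoder states of the tokens covering an entity's mentions.
        return token_states[:, token_positions, :].mean(dim=1)

    def forward(self, input_ids, attention_mask, head_positions, tail_positions):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        head = self.entity_repr(states, head_positions)
        tail = self.entity_repr(states, tail_positions)
        pair = torch.cat([head, tail], dim=-1)  # (batch, 2 * hidden)
        return self.has_relation(pair), self.relation_type(pair)

# Usage: the token positions of entity mentions would come from DocRED
# annotations; the indices below are placeholders for illustration only.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = TwoStepRelationModel()
enc = tokenizer("Hong Wang is an author of the DocRED baseline paper.",
                return_tensors="pt")
bin_logits, rel_logits = model(enc["input_ids"], enc["attention_mask"],
                               head_positions=[1, 2], tail_positions=[8, 9])
if bin_logits.argmax(dim=-1).item() == 1:           # step 1: keep the pair
    predicted_relation = rel_logits.argmax(dim=-1)  # step 2: pick the relation
```

Step 2 is consulted only for pairs that step 1 keeps, which mirrors the split described in the abstract: first decide relation vs. no relation, then classify the specific relation type.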

Datasets

DocRED

Results from the Paper


Task                 Dataset  Model               Metric  Value  Global Rank
Relation Extraction  DocRED   Two-Step+BERT-base  F1      53.92  #55
Relation Extraction  DocRED   Two-Step+BERT-base  Ign F1  54.42  #47
Relation Extraction  DocRED   BERT-base           F1      53.22  #57
Relation Extraction  DocRED   BERT-base           Ign F1  56.17  #43

Methods

BERT