Enriching Pre-trained Language Model with Entity Information for Relation Classification

20 May 2019 · Shanchan Wu, Yifan He

Relation classification is an important NLP task for extracting semantic relations between entities. The state-of-the-art methods for relation classification are primarily based on convolutional or recurrent neural networks. Recently, the pre-trained BERT model has achieved very successful results on many NLP classification and sequence labeling tasks. Relation classification differs from those tasks in that it relies on information about both the sentence and the two target entities. In this paper, we propose a model that both leverages the pre-trained BERT language model and incorporates information from the target entities to tackle the relation classification task. We locate the target entities, transfer the information through the pre-trained architecture, and incorporate the corresponding encodings of the two entities. We achieve significant improvement over the state-of-the-art method on the SemEval-2010 Task 8 relation classification dataset.
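The abstract's core idea, combining the sentence-level BERT encoding with encodings of the two marked target entities, can be illustrated with a minimal PyTorch sketch using HuggingFace `transformers`. The class name `RBertSketch`, the mask arguments `e1_mask`/`e2_mask`, the mean-pooling of entity spans, and the layer sizes below are illustrative assumptions, not the authors' exact implementation:

```python
import torch
import torch.nn as nn
from transformers import BertModel


class RBertSketch(nn.Module):
    """Sketch of a relation classifier that concatenates the [CLS]
    sentence encoding with pooled encodings of the two target entities."""

    def __init__(self, num_relations, hidden=768):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.dropout = nn.Dropout(0.1)
        # Separate projections for the sentence vector and the (shared)
        # entity vectors, then a joint classification layer.
        self.fc_cls = nn.Linear(hidden, hidden)
        self.fc_ent = nn.Linear(hidden, hidden)
        self.classifier = nn.Linear(hidden * 3, num_relations)

    @staticmethod
    def pool_span(hidden_states, span_mask):
        # Average the token vectors inside one entity span.
        # span_mask: (batch, seq_len) with 1s over the entity's tokens.
        mask = span_mask.unsqueeze(-1).float()
        return (hidden_states * mask).sum(1) / mask.sum(1).clamp(min=1)

    def forward(self, input_ids, attention_mask, e1_mask, e2_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        seq = out.last_hidden_state  # (batch, seq_len, hidden)
        cls = torch.tanh(self.fc_cls(self.dropout(out.pooler_output)))
        e1 = torch.tanh(self.fc_ent(self.dropout(self.pool_span(seq, e1_mask))))
        e2 = torch.tanh(self.fc_ent(self.dropout(self.pool_span(seq, e2_mask))))
        # Sentence-level and entity-specific evidence feed the classifier jointly.
        return self.classifier(torch.cat([cls, e1, e2], dim=-1))
```

In use, the input sentence would have its two entities wrapped in special marker tokens before tokenization (the published R-BERT model uses '$' around the first entity and '#' around the second), so that the model can "locate the target entities" as the abstract describes; `e1_mask` and `e2_mask` then flag the token positions of each marked span.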

Task                 Dataset              Model   Metric  Value  Global Rank
Relation Extraction  SemEval-2010 Task 8  R-BERT  F1      89.25  #15
Relation Extraction  TACRED               R-BERT  F1      69.4   #25
