KELM: Knowledge Enhanced Pre-Trained Language Representations with Message Passing on Hierarchical Relational Graphs

9 Sep 2021 · Yinquan Lu, Haonan Lu, Guirong Fu, Qun Liu

Incorporating factual knowledge into pre-trained language models (PLMs) such as BERT is an emerging trend in recent NLP studies. However, most existing methods couple an external knowledge integration module with a modified pre-training loss and re-run pre-training on a large-scale corpus. Re-pretraining these models is usually resource-intensive, and it is difficult to adapt them to another domain with a different knowledge graph (KG). Moreover, those works either cannot embed knowledge context dynamically according to the textual context or struggle with the knowledge ambiguity issue. In this paper, we propose a novel knowledge-aware language model framework based on the fine-tuning process, which equips a PLM with a unified knowledge-enhanced text graph containing both the text and multi-relational sub-graphs extracted from the KG. We design a hierarchical relational-graph-based message passing mechanism that allows the representations of the injected KG and the text to update each other mutually, and that can dynamically select among ambiguous mentioned entities sharing the same mention text. Our empirical results show that our model can efficiently incorporate world knowledge from KGs into existing language models such as BERT, and achieves significant improvement on the machine reading comprehension (MRC) task compared with other knowledge-enhanced models.

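As a rough illustration of the mechanism described in the abstract, the sketch below shows one attention-based message-passing round between PLM token nodes and KG entity nodes in PyTorch. It is a minimal sketch under our own assumptions, not the paper's released implementation: the class name `TextKGMessagePassing`, the tensor layout, and the candidate-mask scheme are hypothetical.

```python
# Hypothetical sketch (not the authors' code) of one text <-> KG message-passing
# round with attention-based aggregation. All names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextKGMessagePassing(nn.Module):
    """One message-passing round between PLM token nodes and KG entity nodes."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.text_to_kg = nn.Linear(hidden_dim, hidden_dim)  # project tokens for affinity scoring
        self.kg_to_text = nn.Linear(hidden_dim, hidden_dim)  # project tokens before updating entities
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)    # merge token state with knowledge message

    def forward(self, token_states, entity_embs, candidate_mask):
        """
        token_states:   (batch, seq_len, dim)       contextual token vectors from the PLM
        entity_embs:    (batch, num_ents, dim)      embeddings of candidate KG entities
        candidate_mask: (batch, seq_len, num_ents)  1 where an entity is a candidate
                        for the mention at that token position, else 0
        """
        # Token-entity affinity, restricted to each mention's own candidate entities.
        scores = torch.einsum("bsd,bed->bse", self.text_to_kg(token_states), entity_embs)
        scores = scores.masked_fill(candidate_mask == 0, float("-inf"))

        # KG -> text: each mention token softly selects among its (possibly
        # ambiguous) candidate entities and receives a knowledge message.
        attn_te = torch.nan_to_num(F.softmax(scores, dim=-1))  # zero weights if a token has no candidates
        kg_msg = torch.einsum("bse,bed->bsd", attn_te, entity_embs)
        new_tokens = self.fuse(torch.cat([token_states, kg_msg], dim=-1))

        # Text -> KG: each entity aggregates the contextual states of the
        # mentions pointing to it, keeping entity representations text-aware.
        attn_et = torch.nan_to_num(F.softmax(scores, dim=1))   # normalize over token positions
        text_msg = torch.einsum("bse,bsd->bed", attn_et, self.kg_to_text(token_states))
        new_entities = entity_embs + text_msg

        return new_tokens, new_entities


if __name__ == "__main__":
    layer = TextKGMessagePassing(hidden_dim=768)
    tokens = torch.randn(2, 16, 768)         # e.g. BERT hidden states
    entities = torch.randn(2, 8, 768)        # candidate entity embeddings from the KG
    mask = torch.randint(0, 2, (2, 16, 8))   # toy mention-to-candidate links
    t, e = layer(tokens, entities, mask)
    print(t.shape, e.shape)                  # torch.Size([2, 16, 768]) torch.Size([2, 8, 768])
```

In practice such a layer would be stacked on top of (or interleaved with) the PLM encoder and run for several rounds so that token and entity representations converge jointly; the soft attention over candidates is what lets the model resolve mentions that map to multiple KG entities.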

Results from the Paper


Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Question Answering | COPA | KELM (finetuning BERT-large based single model) | Accuracy | 78.0 | #39
Question Answering | MultiRC | KELM (finetuning BERT-large based single model) | F1 | 70.8 | #15
Question Answering | MultiRC | KELM (finetuning BERT-large based single model) | EM | 27.2 | #10
Common Sense Reasoning | ReCoRD | KELM (finetuning RoBERTa-large based single model) | F1 | 89.6 | #14
Common Sense Reasoning | ReCoRD | KELM (finetuning RoBERTa-large based single model) | EM | 89.1 | #11
Common Sense Reasoning | ReCoRD | KELM (finetuning BERT-large based single model) | F1 | 76.7 | #24
Common Sense Reasoning | ReCoRD | KELM (finetuning BERT-large based single model) | EM | 76.2 | #21