MLMLM: Link Prediction with Mean Likelihood Masked Language Model

Knowledge Bases (KBs) are easy to query, verifiable, and interpretable. They, however, scale with man-hours and high-quality data. Masked Language Models (MLMs), such as BERT, scale with computing power as well as with unstructured raw text data, but the knowledge contained within those models is not directly interpretable. We propose to perform link prediction with MLMs to address both the scalability issues of KBs and the interpretability issues of MLMs. To do so, we introduce MLMLM, Mean Likelihood Masked Language Model, an approach that compares the mean likelihood of generating the different entities to perform link prediction in a tractable manner. We obtain State of the Art (SotA) results on the WN18RR dataset and the best non-entity-embedding-based results on the FB15k-237 dataset. We also obtain convincing results on link prediction for previously unseen entities, making MLMLM a suitable approach for introducing new entities into a KB.

Published in Findings of ACL 2021.
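As a rough illustration of the mean-likelihood idea described in the abstract, the sketch below scores candidate tail entities for a query (head, relation, ?) by the mean per-token log-likelihood a masked LM assigns to each candidate when it fills a span of mask tokens. The model name (roberta-base), the textual template, and the helper mean_log_likelihood are assumptions made for illustration and are not taken from the paper's implementation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

def mean_log_likelihood(head: str, relation: str, candidate: str) -> float:
    # Tokenize the candidate entity and replace it with that many mask tokens.
    cand_ids = tokenizer(candidate, add_special_tokens=False)["input_ids"]
    mask_span = " ".join([tokenizer.mask_token] * len(cand_ids))
    # The textual template is an assumption; the paper's input format may differ.
    text = f"{head} {relation} {mask_span}"
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits[0]  # (seq_len, vocab)
    log_probs = torch.log_softmax(logits, dim=-1)
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    token_scores = [log_probs[p, t].item() for p, t in zip(mask_pos, cand_ids)]
    # Mean per-token log-likelihood of the candidate filling the masked slot.
    return sum(token_scores) / len(token_scores)

# Rank illustrative candidate entities for the query ("plant", "hypernym", ?).
candidates = ["organism", "vascular plant", "animal"]
ranked = sorted(candidates,
                key=lambda c: mean_log_likelihood("plant", "hypernym", c),
                reverse=True)
print(ranked)
```

Averaging over the candidate's tokens (rather than summing) keeps scores comparable across entities whose names tokenize to different lengths, which is what makes the ranking tractable across a large entity vocabulary.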

Results from the Paper


Ranked #11 on Link Prediction on WN18RR (using extra training data)

Task             Dataset     Model   Metric    Value    Global Rank
Link Prediction  FB15k-237   MLMLM   MRR       0.2591   #59
Link Prediction  FB15k-237   MLMLM   Hits@10   0.4026   #66
Link Prediction  FB15k-237   MLMLM   Hits@3    0.2820   #45
Link Prediction  FB15k-237   MLMLM   Hits@1    0.1871   #50
Link Prediction  FB15k-237   MLMLM   MR        411      #27
Link Prediction  WN18RR      MLMLM   MRR       0.5017   #8
Link Prediction  WN18RR      MLMLM   Hits@10   0.611    #11
Link Prediction  WN18RR      MLMLM   Hits@3    0.5418   #7
Link Prediction  WN18RR      MLMLM   Hits@1    0.4391   #35
Link Prediction  WN18RR      MLMLM   MR        1603     #12
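The metrics above (MRR, Hits@k, MR) are standard ranking metrics over the rank assigned to each test triple's true entity. A minimal sketch of how they are typically computed is shown below; the helper functions and the example ranks are illustrative, not the paper's data.

```python
def mrr(ranks):
    # Mean Reciprocal Rank: average of 1/rank over all test queries.
    return sum(1.0 / r for r in ranks) / len(ranks)

def hits_at_k(ranks, k):
    # Fraction of queries where the true entity is ranked in the top k.
    return sum(1 for r in ranks if r <= k) / len(ranks)

def mean_rank(ranks):
    # Mean Rank (MR): average rank of the true entity.
    return sum(ranks) / len(ranks)

ranks = [1, 3, 12, 2, 100]  # illustrative ranks of the true entity per query
print(mrr(ranks), hits_at_k(ranks, 10), mean_rank(ranks))
```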
