Self-attention Presents Low-dimensional Knowledge Graph Embeddings for Link Prediction

20 Dec 2021 · Peyman Baghershahi, Reshad Hosseini, Hadi Moradi

A few models have tackled the link prediction problem, also known as knowledge graph completion, by embedding knowledge graphs in comparatively low dimensions. However, state-of-the-art results are attained at the cost of considerably increasing the dimensionality of embeddings, which causes scalability issues for huge knowledge bases. Transformers have recently been used successfully as powerful encoders for knowledge graphs, but the available models still suffer from scalability issues. To address this limitation, we introduce a Transformer-based model that produces expressive low-dimensional embeddings. The key to our approach is a large number of self-attention heads, which apply query-dependent projections to capture mutual information between entities and relations. Empirical results on the standard link prediction benchmarks WN18RR and FB15k-237 demonstrate that our model performs comparably to the current state-of-the-art models. Notably, we attain these results while reducing the dimensionality of embeddings by 66.9% on average compared to the five best recent state-of-the-art competitors.
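
To make the encoding idea concrete, below is a minimal sketch (not the authors' released code) of a Transformer-style encoder that treats the head entity and the relation as a two-token sequence, applies multi-head self-attention over low-dimensional embeddings, and scores all candidate tail entities with a dot product. The class name SAttLESketch, the embedding size of 100, the 20 heads, and the dot-product decoder are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SAttLESketch(nn.Module):
    """Illustrative sketch: encode a (head, relation) pair with multi-head
    self-attention and score every candidate tail entity by dot product."""

    def __init__(self, n_entities, n_relations, dim=100, n_heads=20):
        super().__init__()
        self.ent_emb = nn.Embedding(n_entities, dim)
        self.rel_emb = nn.Embedding(n_relations, dim)
        # Many heads over a small embedding: each head is a low-rank,
        # query-dependent projection (dim / n_heads = 5 per head here).
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, head_idx, rel_idx):
        # Treat the head entity and the relation as a 2-token sequence.
        seq = torch.stack([self.ent_emb(head_idx), self.rel_emb(rel_idx)], dim=1)
        attn_out, _ = self.attn(seq, seq, seq)
        seq = self.norm1(seq + attn_out)
        seq = self.norm2(seq + self.ffn(seq))
        # Use the relation position's output as the query vector and
        # score all entities as candidate tails.
        query = seq[:, 1, :]                      # (batch, dim)
        return query @ self.ent_emb.weight.t()    # (batch, n_entities)


# Example usage (entity/relation counts roughly matching WN18RR):
model = SAttLESketch(n_entities=40943, n_relations=11)
scores = model(torch.tensor([0, 1]), torch.tensor([3, 5]))  # (2, 40943) tail scores
```

With 20 heads on a 100-dimensional embedding, each head attends in a 5-dimensional subspace, which is one way to obtain many query-dependent projections while keeping the overall embedding dimension small.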


Results from the Paper


Task              Dataset     Model    Metric    Value   Global Rank
Link Prediction   FB15k-237   SAttLE   MRR       0.36    #22
Link Prediction   FB15k-237   SAttLE   Hits@10   0.545   #20
Link Prediction   FB15k-237   SAttLE   Hits@3    0.396   #16
Link Prediction   FB15k-237   SAttLE   Hits@1    0.268   #16
Link Prediction   WN18RR      SAttLE   MRR       0.491   #18
Link Prediction   WN18RR      SAttLE   Hits@10   0.558   #42
Link Prediction   WN18RR      SAttLE   Hits@3    0.508   #20
Link Prediction   WN18RR      SAttLE   Hits@1    0.454   #12
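
The metrics above are rank-based. For reference, the sketch below shows how MRR and Hits@k are typically computed from the rank of the correct entity for each test triple under the standard (filtered) ranking protocol; the function name ranking_metrics and the example ranks are illustrative assumptions, not taken from the paper.

```python
import torch


def ranking_metrics(ranks, ks=(1, 3, 10)):
    """ranks: 1-based rank of the correct entity for each test triple."""
    ranks = ranks.float()
    metrics = {"MRR": (1.0 / ranks).mean().item()}
    for k in ks:
        # Fraction of triples whose correct entity is ranked in the top k.
        metrics[f"Hits@{k}"] = (ranks <= k).float().mean().item()
    return metrics


# Example: ranks of the correct tail entity for five test triples.
print(ranking_metrics(torch.tensor([1, 2, 4, 10, 50])))
```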
