Embedding Knowledge Graphs Attentive to Positional and Centrality Qualities

Knowledge graph embeddings (KGE) have recently been at the center of many artificial intelligence studies due to their applicability to downstream tasks such as link prediction and node classification. However, most knowledge graph embedding models encode only the local graph structure of an entity into the vector space, i.e., information from its 1-hop neighborhood. Capturing not only the local graph structure but also global features of entities is crucial for prediction tasks on knowledge graphs. This work proposes a novel KGE method named Graph Feature Attentive Neural Network (GFA-NN) that computes graphical features of entities, so that the resulting embeddings are attentive to two types of global network features: first, the relative centrality of nodes, based on the observation that some entities are more “prominent” than others; and second, the relative position of entities in the graph. GFA-NN computes several centrality values per entity, generates a random set of reference entities, and computes a given entity’s shortest-path distance to each entity in the reference set. It then learns this information by optimizing objectives specified on each of these features. We evaluate GFA-NN on several link prediction benchmarks in the inductive and transductive settings and show that it achieves on-par or better results than state-of-the-art KGE solutions.
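The feature-extraction step described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' released code: it assumes an undirected networkx view of the knowledge graph, and the function name, the particular centrality measures, and the sentinel value for unreachable anchors are illustrative choices only.

```python
# Hypothetical sketch of the graph-feature step described above.
# Assumes an undirected networkx view of the KG; names are illustrative.
import random
import networkx as nx

def graph_features(G: nx.Graph, num_anchors: int = 32, seed: int = 0):
    """Per-entity centrality scores plus shortest-path distances
    to a random set of reference (anchor) entities."""
    rng = random.Random(seed)

    # Several centrality measures capture how "prominent" an entity is.
    centralities = {
        "degree": nx.degree_centrality(G),
        "pagerank": nx.pagerank(G),
        "closeness": nx.closeness_centrality(G),
    }

    # A random reference set; distances to these anchors encode an
    # entity's relative position in the graph.
    anchors = rng.sample(list(G.nodes()),
                         k=min(num_anchors, G.number_of_nodes()))
    dist_to_anchor = {
        a: nx.single_source_shortest_path_length(G, a) for a in anchors
    }

    features = {}
    for v in G.nodes():
        cent = [centralities[name][v] for name in centralities]
        # Unreachable anchors get a sentinel distance (number of nodes).
        pos = [dist_to_anchor[a].get(v, G.number_of_nodes()) for a in anchors]
        features[v] = cent + pos
    return features
```

In the method itself these per-entity features serve as targets of separate objectives during embedding training; the sketch only shows how centrality and anchor-distance features could be computed beforehand.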


Results from the Paper


Task                      Dataset     Model   Metric Name     Metric Value  Global Rank
Link Prediction           FB15k-237   GFA-NN  MRR             0.338         # 42
Link Prediction           FB15k-237   GFA-NN  Hits@10         0.522         # 40
Link Prediction           FB15k-237   GFA-NN  MR              186           # 20
Link Property Prediction  ogbl-biokg  GFA-NN  Test MRR        0.9011        # 2
Link Property Prediction  ogbl-biokg  GFA-NN  Validation MRR  0.9011        # 2
Link Prediction           WN18RR      GFA-NN  MRR             0.486         # 28
Link Prediction           WN18RR      GFA-NN  Hits@10         0.575         # 31
Link Prediction           WN18RR      GFA-NN  MR              3390          # 25
