Explaining Knowledge Graph Embedding via Latent Rule Learning

29 Sep 2021 · Wen Zhang, Mingyang Chen, Zezhong Xu, Yushan Zhu, Huajun Chen

Knowledge Graph Embeddings (KGEs) embed entities and relations into a continuous vector space under certain assumptions and are powerful tools for representation learning on knowledge graphs. However, these vector-space assumptions make a KGE a one-step reasoner that directly predicts final results without interpretable multi-hop reasoning steps. KGEs are therefore black-box models, and explaining their predictions remains an open problem. In this paper, we propose KGExplainer, the first general approach to providing explanations for predictions from KGE models. KGExplainer is a multi-hop reasoner that learns latent rules for link prediction and is encouraged, through knowledge distillation, to behave similarly to the target KGE during prediction. For explanation, KGExplainer outputs a ranked list of rules for each relation. Experiments on benchmark datasets with two target KGEs show that our approach faithfully replicates KGE behavior for link prediction and outputs high-quality rules for effective explanations.
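The abstract gives no implementation details, but the central idea it describes, distilling a black-box KGE teacher into a multi-hop, rule-based student, can be sketched roughly as below. Everything in this sketch (the PathStudent class, composing 2-hop rule bodies by summing relation embeddings, the MSE score-matching loss) is an illustrative assumption for the distillation setup, not the paper's actual method.

```python
# A minimal sketch, assuming a path-based student distilled from a KGE teacher.
# Names (PathStudent, distill_step, teacher_scores) are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PathStudent(nn.Module):
    """Scores a triple by composing a 2-hop relation path (a latent rule body)
    and comparing it against the head relation with a bilinear scorer."""

    def __init__(self, num_relations: int, dim: int):
        super().__init__()
        self.rel = nn.Embedding(num_relations, dim)
        self.rule_scorer = nn.Bilinear(dim, dim, 1)

    def forward(self, head_rel, path_rels):
        # path_rels: (batch, 2) relation ids for a path h -r1-> m -r2-> t.
        # Summing embeddings is one simple (assumed) way to compose a rule body.
        body = self.rel(path_rels).sum(dim=1)
        return self.rule_scorer(self.rel(head_rel), body).squeeze(-1)


def distill_step(student, optimizer, head_rel, path_rels, teacher_scores):
    """One knowledge-distillation step: the student is trained to reproduce
    the pretrained KGE teacher's triple scores (soft-label matching)."""
    student_scores = student(head_rel, path_rels)
    loss = F.mse_loss(student_scores, teacher_scores)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Toy usage with random tensors standing in for a real KG and teacher KGE.
student = PathStudent(num_relations=10, dim=16)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
head_rel = torch.randint(0, 10, (32,))
path_rels = torch.randint(0, 10, (32, 2))
teacher_scores = torch.randn(32)  # in practice: scores from the trained KGE
print(distill_step(student, opt, head_rel, path_rels, teacher_scores))
```

After training, the learned bilinear weights between rule bodies and head relations could be read off and sorted to produce a ranked rule list per relation, in the spirit of the explanations the abstract describes.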
