On Evaluating Embedding Models for Knowledge Base Completion

Knowledge bases contribute to many web search and mining tasks, yet they are often incomplete. To add missing facts to a given knowledge base, various embedding models have been proposed in the recent literature. Perhaps surprisingly, relatively simple models with limited expressiveness often perform remarkably well under today's most commonly used evaluation protocols. In this paper, we explore whether recent embedding models actually work well for knowledge base completion and argue that the current evaluation protocols are better suited to question answering than to knowledge base completion. We show that when evaluation focuses on a prediction task that more directly reflects knowledge base completion, the performance of current embedding models is unsatisfactory, even on datasets previously thought to be too easy. This is especially true when the embedding models are compared against a simple rule-based baseline. Our findings indicate the need for more research into both embedding models and evaluation protocols for knowledge base completion.

WS 2019
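To make the distinction between the two evaluation styles concrete, here is a minimal, self-contained sketch contrasting the widely used entity-ranking protocol (answering "which tail entity completes (h, r, ?)?", essentially a question-answering task) with a triple-classification protocol (deciding whether a given triple is a true fact, which is closer to completion). The DistMult-style scorer, the random embeddings, the example triples, and the fixed decision threshold are all illustrative assumptions, not the paper's actual models, data, or baseline.

```python
# Illustrative sketch of the two evaluation protocols; all data and the
# scoring function are hypothetical stand-ins, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 100, 5, 16
E = rng.normal(size=(num_entities, dim))   # entity embeddings
R = rng.normal(size=(num_relations, dim))  # relation embeddings

def score(h, r, t):
    """Toy DistMult-style scorer: higher means 'more plausible'."""
    return float(np.sum(E[h] * R[r] * E[t]))

test_triples = [(0, 1, 2), (3, 0, 4)]  # (head, relation, tail)

# Protocol 1: entity ranking (the common protocol, akin to question answering).
# For each test triple, rank the true tail against all candidate tails; a model
# only needs the correct answer to score above the alternatives.
ranks = []
for h, r, t in test_triples:
    scores = np.array([score(h, r, c) for c in range(num_entities)])
    ranks.append(int((scores > scores[t]).sum()) + 1)
print("MRR:", np.mean([1.0 / rk for rk in ranks]))

# Protocol 2: triple classification (closer to knowledge base completion).
# Decide, per triple, whether it is a true fact, e.g. by thresholding the
# score; this additionally requires calibrated absolute plausibility.
threshold = 0.0  # assumed fixed here; in practice it would be tuned on dev data
negatives = [(h, r, (t + 1) % num_entities) for h, r, t in test_triples]
labeled = [(tr, 1) for tr in test_triples] + [(tr, 0) for tr in negatives]
correct = sum((score(*tr) > threshold) == bool(y) for tr, y in labeled)
print("Accuracy:", correct / len(labeled))
```

A model can look strong under the first protocol while failing the second: ranking only tests relative ordering among candidates, whereas classification asks whether a specific candidate fact should be added to the knowledge base at all, which is the setting in which the paper reports unsatisfactory performance.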
