Knowledge base completion is the task of automatically inferring missing facts by reasoning about the information already present in the knowledge base. A knowledge base is a collection of relational facts, often represented as (subject, relation, object) triples.
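To make the triple representation concrete, the following minimal Python sketch stores a toy knowledge base as a set of triples; all entity and relation names are illustrative and not drawn from any benchmark.

```python
# A minimal sketch: a knowledge base as a set of (subject, relation, object) triples.
# Entity and relation names here are illustrative placeholders.
knowledge_base = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
}

# Knowledge base completion asks: given the facts above, can we infer a
# plausible missing triple such as ("Paris", "located_in", "Europe")?
missing_fact = ("Paris", "located_in", "Europe")
print(missing_fact in knowledge_base)  # False -- absent, so it must be inferred
```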
However, as we show in this paper through experiments on standard link prediction benchmarks for knowledge bases, ComplEx, a variant of CP, achieves performance similar to recent approaches based on Tucker decomposition at all operating points in terms of parameter count.
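For reference, ComplEx scores a triple as the real part of a trilinear product over complex embeddings, score(s, r, o) = Re(⟨e_s, w_r, conj(e_o)⟩) (Trouillon et al., 2016). The sketch below illustrates this scoring function with an arbitrary embedding dimension and random vectors; it is not the paper's implementation.

```python
import numpy as np

# Sketch of the ComplEx scoring function: embeddings are complex vectors,
# and the score is the real part of a trilinear product with the conjugated
# object embedding. Dimension and initialization are purely illustrative.
rank = 4
rng = np.random.default_rng(0)

def random_complex(size):
    # Complex embedding with independent real and imaginary parts.
    return rng.normal(size=size) + 1j * rng.normal(size=size)

e_s = random_complex(rank)  # subject entity embedding
w_r = random_complex(rank)  # relation embedding
e_o = random_complex(rank)  # object entity embedding

# Taking the real part makes the score asymmetric in (s, o), which is what
# lets ComplEx model non-symmetric relations despite its CP-like form.
score = np.real(np.sum(e_s * w_r * np.conj(e_o)))
print(score)
```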
Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.
Our results demonstrate the effectiveness of language model representations in boosting link prediction performance and the advantages of learning from local graph structure (+1.5 points in MRR for ConceptNet) when training on subgraphs for computational efficiency.
We propose Dynamically Pruned Message Passing Networks (DPMPN) for large-scale knowledge graph reasoning.
In addition, the system uses a deep learning approach to knowledge base completion, exploiting the global structure of the induced KG to further refine the confidence of newly discovered relations.
State-of-the-art approaches for Knowledge Base Completion (KBC) exploit deep neural networks trained with both true and false assertions: positive assertions are taken directly from the knowledge base, whereas negative ones are generated by randomly sampling entities.
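The random-sampling scheme can be sketched as follows; corrupting either the subject or the object slot and filtering out known facts are common conventions assumed here for illustration, not details taken from any specific system.

```python
import random

# Sketch of negative sampling for KBC training: corrupt one slot of a true
# triple with a random entity, under the common assumption that the result
# is (probably) a false assertion. Entities and triples are illustrative.
entities = ["Paris", "France", "Europe", "Berlin", "Germany"]
positive = ("Paris", "capital_of", "France")
known_facts = {positive}

def corrupt(triple, entities, known_facts, rng=random):
    s, r, o = triple
    while True:
        # Replace either the subject or the object with a random entity.
        if rng.random() < 0.5:
            candidate = (rng.choice(entities), r, o)
        else:
            candidate = (s, r, rng.choice(entities))
        # Keep only candidates not already asserted as true.
        if candidate not in known_facts:
            return candidate

negative = corrupt(positive, entities, known_facts)
print(negative)  # e.g. ("Berlin", "capital_of", "France")
```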
We propose an alternative entity-pair ranking protocol that considers all model predictions as a whole and is thus better suited to the task.
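One way to read this protocol: for a fixed relation, every (subject, object) pair is scored and ranked jointly, rather than ranking candidate objects per query. The sketch below illustrates that reading; the scoring function and entity list are placeholders, not the protocol's actual definition.

```python
from itertools import product

# Sketch of entity-pair ranking: for a given relation, score all
# (subject, object) pairs jointly and rank them as a single list.
# `score` stands in for any trained KBC model's scoring function.
entities = ["Paris", "France", "Europe", "Berlin", "Germany"]

def score(s, r, o):
    # Placeholder; a real model would use learned embeddings here.
    return hash((s, r, o)) % 100

relation = "capital_of"
ranked_pairs = sorted(
    ((s, o) for s, o in product(entities, entities) if s != o),
    key=lambda pair: score(pair[0], relation, pair[1]),
    reverse=True,
)
print(ranked_pairs[:3])  # top-scoring entity pairs for the relation
```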
A common approach in knowledge base completion (KBC) is to learn representations for entities and relations in order to infer missing facts by generalizing existing ones.