Knowledge Base Completion
64 papers with code • 0 benchmarks • 2 datasets
Knowledge base completion is the task of automatically inferring missing facts by reasoning about the information already present in the knowledge base. A knowledge base is a collection of relational facts, often represented as (subject, relation, object) triples.
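As a minimal sketch of this idea, the snippet below stores a toy knowledge base as a set of (subject, relation, object) triples and infers missing facts with a single hand-written transitivity rule. The entities, the `located_in` relation, and the rule itself are illustrative assumptions, not taken from any of the papers listed here; real systems typically learn such patterns from embeddings or mined rules rather than hard-coding them.

```python
# Toy knowledge base: a set of (subject, relation, object) triples.
# Entities and the "located_in" relation are illustrative only.
kb = {
    ("Paris", "located_in", "France"),
    ("France", "located_in", "Europe"),
    ("Louvre", "located_in", "Paris"),
}

def complete_transitive(kb, relation):
    """Infer missing facts by chaining a transitive relation:
    (a, r, b) and (b, r, c) imply (a, r, c)."""
    inferred = set()
    for (a, r1, b) in kb:
        for (b2, r2, c) in kb:
            if r1 == r2 == relation and b == b2 and (a, relation, c) not in kb:
                inferred.add((a, relation, c))
    return inferred

print(sorted(complete_transitive(kb, "located_in")))
# e.g. infers (Paris, located_in, Europe) from the two chained facts
```

Embedding-based methods replace the explicit rule with a learned scoring function over candidate triples, but the completion objective is the same: propose triples that are missing from the store.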
Benchmarks
These leaderboards are used to track progress in Knowledge Base Completion
Latest papers with no code
DOCENT: Learning Self-Supervised Entity Representations from Large Document Collections
This enables a new class of powerful, high-capacity representations that can ultimately distill much of the useful information about an entity from multiple text sources, without any human supervision.
Modelling General Properties of Nouns by Selectively Averaging Contextualised Embeddings
While the success of pre-trained language models has largely eliminated the need for high-quality static word vectors in many NLP applications, such vectors continue to play an important role in tasks where words need to be modelled in the absence of linguistic context.
Association Rules Enhanced Knowledge Graph Attention Network
However, in most existing embedding methods, only fact triplets are utilized, and logical rules have not been thoroughly studied for the knowledge base completion task.
Continuous and Interactive Factual Knowledge Learning in Verification Dialogues
In this paper, we eliminate this assumption and allow the subject s, relation r, and/or object t of a query triple to be unknown to the KB, a setting we call open-world knowledge base completion (OKBC).
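The open-world distinction above can be sketched as a simple membership test: a query triple is "open-world" when any of its elements falls outside the KB's known entities and relations. The KB contents and the helper below are hypothetical, chosen only to make the definition concrete.

```python
# Sketch: classify a query triple as closed- or open-world for a toy KB.
# The KB contents are illustrative assumptions.
kb = {("Paris", "capital_of", "France")}
entities = {e for (s, _, t) in kb for e in (s, t)}
relations = {r for (_, r, _) in kb}

def is_open_world(s, r, t):
    """True if any element of (s, r, t) is unknown to the KB."""
    return s not in entities or r not in relations or t not in entities

print(is_open_world("Paris", "capital_of", "France"))    # all parts known
print(is_open_world("Berlin", "capital_of", "Germany"))  # unseen entities
```

Closed-world completion only ranks triples built from known entities and relations; the open-world setting must additionally represent the unseen elements, e.g. from their textual descriptions.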
A Survey on Graph Neural Networks for Knowledge Graph Completion
Knowledge Graphs are increasingly becoming popular for a variety of downstream tasks like Question Answering and Information Retrieval.
Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning
In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training, to inject language models with structured knowledge via learning from raw text.
Revisiting Evaluation of Knowledge Base Completion Models
To address these issues, we gather a semi-complete KG, referred to as YAGO3-TC, from a random subgraph of the test and validation data of YAGO3-10, which enables us to compute accurate triple-classification accuracy on this data.
Mining Commonsense Facts from the Physical World
In this paper, we propose a new task of mining commonsense facts from the raw text that describes the physical world.
Knowledge Graph Embedding via Graph Attenuated Attention Networks
However, these methods assign the same weight to every relation path in the knowledge graph and ignore the rich information present in neighbor nodes, which results in incomplete mining of triple features.
Reasoning Over Paths via Knowledge Base Completion
We demonstrate that our method is able to effectively rank a list of known paths between a pair of entities and also come up with plausible paths that are not present in the knowledge graph.