Knowledge Base Completion
64 papers with code • 0 benchmarks • 2 datasets
Knowledge base completion is the task of automatically inferring missing facts by reasoning about the information already present in a knowledge base. A knowledge base is a collection of relational facts, often represented as (subject, relation, object) triples.
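The definition above can be made concrete with a toy sketch: a knowledge base as a set of triples, plus one hand-written inference rule that proposes a missing fact. All names and the rule itself are illustrative only; real systems learn such regularities from data rather than hard-coding them.

```python
# Toy knowledge base: a set of (subject, relation, object) triples.
kb = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
}

def infer_located_in(kb):
    """Hypothetical rule: if X capital_of Y and Y located_in Z,
    infer the missing triple (X, located_in, Z)."""
    inferred = set()
    for (s1, r1, o1) in kb:
        for (s2, r2, o2) in kb:
            if r1 == "capital_of" and r2 == "located_in" and o1 == s2:
                inferred.add((s1, "located_in", o2))
    return inferred - kb  # keep only facts not already present

print(infer_located_in(kb))
# {('Paris', 'located_in', 'Europe')}
```

Completing the triple ("Paris", "located_in", ?) from the two existing facts is exactly the kind of missing-fact prediction the papers below address at scale, with learned models replacing the hand-written rule.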
Benchmarks
These leaderboards are used to track progress in Knowledge Base Completion
Latest papers
Scalable knowledge base completion with superposition memories
We present Harmonic Memory Networks (HMem), a neural architecture for knowledge base completion that models entities as weighted sums of pairwise bindings between an entity's neighbors and corresponding relations.
Relation Prediction as an Auxiliary Training Objective for Improving Multi-Relational Graph Representations
Learning good representations on multi-relational graphs is essential to knowledge base completion (KBC).
Knowledge Base Completion Meets Transfer Learning
The aim of knowledge base completion is to predict unseen facts from existing facts in knowledge bases.
Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study
Biomedical knowledge graphs (KGs) hold rich information on entities such as diseases, drugs, and genes.
BERTnesia: Investigating the capture and forgetting of knowledge in BERT
We found that ranking models forget the least and retain more knowledge in their final layer compared to masked language modeling and question-answering.
QuatDE: Dynamic Quaternion Embedding for Knowledge Graph Completion
Knowledge graph embedding has been an active research topic for knowledge base completion (KBC), with progressive improvement from the initial TransE, TransH, and RotatE to the current state-of-the-art QuatE.
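To illustrate the embedding family this line of work builds on, here is a minimal sketch of TransE-style scoring: a relation is modeled as a translation vector, so a plausible triple (h, r, t) should satisfy h + r ≈ t, and candidates are ranked by negative distance. The embeddings below are random, untrained toy vectors, so the resulting ranking is arbitrary; the point is only the scoring and ranking mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # toy embedding dimension

# Random stand-in embeddings (a trained model would learn these).
entities = {e: rng.normal(size=dim) for e in ["paris", "france", "berlin"]}
relations = {"capital_of": rng.normal(size=dim)}

def transe_score(h, r, t):
    """TransE plausibility: negative L2 distance ||h + r - t||.
    Higher (closer to zero) means more plausible."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

# Complete the query (paris, capital_of, ?) by ranking candidate tails.
ranked = sorted(entities, key=lambda t: transe_score("paris", "capital_of", t),
                reverse=True)
print(ranked)
```

TransH, RotatE, and QuatE keep this score-and-rank recipe but change the geometry of the interaction (relation-specific hyperplanes, rotations in complex space, and quaternion rotations, respectively).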
K-PLUG: Knowledge-injected Pre-trained Language Model for Natural Language Understanding and Generation in E-Commerce
K-PLUG achieves new state-of-the-art results on a suite of domain-specific NLP tasks, including product knowledge base completion, abstractive product summarization, and multi-turn dialogue, and significantly outperforms baselines across the board, demonstrating that the proposed method effectively learns a diverse set of domain-specific knowledge for both language understanding and generation tasks.
Ranking vs. Classifying: Measuring Knowledge Base Completion Quality
We randomly remove some of these correct answers from the data set, simulating the realistic scenario of real-world entities missing from a KB.
IntKB: A Verifiable Interactive Framework for Knowledge Base Completion
Our system is designed such that it continuously learns during the KB completion task and therefore significantly improves over time on relations that initially have zero or few training examples.