Knowledge Base Completion

64 papers with code • 0 benchmarks • 2 datasets

Knowledge base completion is the task of automatically inferring missing facts by reasoning over the information already present in a knowledge base. A knowledge base is a collection of relational facts, typically represented as (subject, relation, object) triples.
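
As a minimal illustration of this setup (not tied to any particular paper below), a knowledge base can be stored as a set of (subject, relation, object) triples, and completion amounts to ranking candidate entities for a (subject, relation, ?) query. The toy co-occurrence score here is only a placeholder for a learned model.

```python
# Minimal sketch: a KB as a set of (subject, relation, object) triples, and
# completion as ranking candidate objects for a (subject, relation, ?) query.
# The toy score counts co-occurrence evidence; real systems learn a scoring
# function (embeddings, language models, etc.).
from collections import Counter

kb = {
    ("marie_curie", "born_in", "warsaw"),
    ("marie_curie", "field", "physics"),
    ("pierre_curie", "field", "physics"),
    ("pierre_curie", "spouse", "marie_curie"),
}

def candidate_objects(kb, subject, relation):
    """Rank entities as possible objects for (subject, relation, ?)."""
    entities = {s for s, _, _ in kb} | {o for _, _, o in kb}
    # Toy evidence: how often each entity appears as the object of `relation`.
    evidence = Counter(o for _, r, o in kb if r == relation)
    return sorted(entities, key=lambda e: evidence[e], reverse=True)

print(candidate_objects(kb, "marie_curie", "spouse")[:3])
```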

Scalable knowledge base completion with superposition memories

matthiasrlalisse/hmemnetworks 24 Oct 2021

We present Harmonic Memory Networks (HMem), a neural architecture for knowledge base completion that models entities as weighted sums of pairwise bindings between an entity's neighbors and corresponding relations.
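
The snippet below is only a rough sketch of the general idea of a superposition memory, i.e. building an entity representation as a weighted sum of neighbor-relation bindings. It uses circular correlation as the binding operator and fixed weights, which are illustrative assumptions, not the HMem architecture itself.

```python
# Illustrative sketch (not the HMem architecture): represent an entity as a
# weighted superposition of bindings between neighbor embeddings and the
# embeddings of the relations connecting them.
import numpy as np

rng = np.random.default_rng(0)
dim = 64

def bind(a, b):
    """Circular correlation of two vectors (a holographic binding operator)."""
    return np.fft.irfft(np.conj(np.fft.rfft(a)) * np.fft.rfft(b), n=len(a))

# Random embeddings for relations and neighboring entities (assumed inputs).
relations = {r: rng.standard_normal(dim) for r in ["born_in", "field"]}
neighbors = {e: rng.standard_normal(dim) for e in ["warsaw", "physics"]}

# Entity memory: weighted sum of (relation, neighbor) bindings.
edges = [("born_in", "warsaw", 0.5), ("field", "physics", 0.5)]  # (r, n, weight)
entity_vec = sum(w * bind(relations[r], neighbors[n]) for r, n, w in edges)

print(entity_vec.shape)  # (64,)
```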

Relation Prediction as an Auxiliary Training Objective for Improving Multi-Relational Graph Representations

facebookresearch/ssl-relation-prediction AKBC 2021

Learning good representations on multi-relational graphs is essential to knowledge base completion (KBC).
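
As a rough sketch of the general idea (with arbitrary scores standing in for a model's outputs, not the paper's implementation), the auxiliary objective adds a relation-prediction loss on top of the usual link-prediction loss.

```python
# Sketch of a combined training objective (illustrative only): the usual
# link-prediction loss over candidate objects plus an auxiliary loss for
# predicting the triple's relation, weighted by a coefficient `lam`.
import numpy as np

def softmax_xent(scores, target_idx):
    """Cross-entropy of a softmax over `scores` against the true index."""
    scores = scores - scores.max()
    log_probs = scores - np.log(np.exp(scores).sum())
    return -log_probs[target_idx]

rng = np.random.default_rng(0)
object_scores = rng.standard_normal(1000)   # scores over candidate objects
relation_scores = rng.standard_normal(237)  # scores over candidate relations

link_loss = softmax_xent(object_scores, target_idx=42)   # main KBC loss
rel_loss = softmax_xent(relation_scores, target_idx=7)   # auxiliary loss
lam = 0.5  # auxiliary weight (a hyperparameter; the value here is arbitrary)
total_loss = link_loss + lam * rel_loss
print(round(float(total_loss), 3))
```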

06 Oct 2021

Knowledge Base Completion Meets Transfer Learning

vid-koci/kbctransferlearning EMNLP 2021

The aim of knowledge base completion is to predict unseen facts from existing facts in knowledge bases.

30 Aug 2021

Scientific Language Models for Biomedical Knowledge Base Completion: An Empirical Study

rahuln/lm-bio-kgc AKBC 2021

Biomedical knowledge graphs (KGs) hold rich information on entities such as diseases, drugs, and genes.

17 Jun 2021

BERTnesia: Investigating the capture and forgetting of knowledge in BERT

jwallat/knowledge-probing EMNLP (BlackboxNLP) 2020

We found that ranking models forget the least and retain more knowledge in their final layer compared to masked language modeling and question-answering.
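
For context on this style of knowledge probing (a generic illustration, not the paper's probing setup), a masked language model can be queried for relational facts with cloze-style prompts, for example via the Hugging Face `transformers` fill-mask pipeline.

```python
# Generic cloze-style knowledge probe (illustration, not the paper's setup):
# query a masked language model for the object of a relational fact.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill_mask("Marie Curie was born in [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```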

05 Jun 2021

QuatDE: Dynamic Quaternion Embedding for Knowledge Graph Completion

hopkin-ghp/QuatDE 19 May 2021

Knowledge graph embedding has been an active research topic for knowledge graph completion (KGC), with progressive improvement from the early TransE, TransH, and RotatE to the current state-of-the-art QuatE.
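
For background on the quaternion approach this line of work builds on (a sketch of QuatE-style scoring, not QuatDE's dynamic variant), a triple is scored by rotating the head embedding with the unit-normalized relation quaternion via the Hamilton product and taking an inner product with the tail.

```python
# Sketch of QuatE-style scoring (background for QuatDE, not its dynamic
# variant): embeddings are quaternions (4 real components per dimension).
import numpy as np

def hamilton(p, q):
    """Hamilton product of quaternion arrays with components (a, b, c, d)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.stack([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,
    ])

def score(head, rel, tail):
    rel = rel / np.linalg.norm(rel, axis=0, keepdims=True)  # unit quaternions
    return float(np.sum(hamilton(head, rel) * tail))

rng = np.random.default_rng(0)
dim = 50  # quaternion dimensions per entity/relation
h, r, t = (rng.standard_normal((4, dim)) for _ in range(3))
print(round(score(h, r, t), 3))
```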

K-PLUG: Knowledge-injected Pre-trained Language Model for Natural Language Understanding and Generation in E-Commerce

xu-song/k-plug Findings (EMNLP) 2021

K-PLUG achieves new state-of-the-art results on a suite of domain-specific NLP tasks, including product knowledge base completion, abstractive product summarization, and multi-turn dialogue, significantly outperforming baselines across the board, which demonstrates that the proposed method effectively learns a diverse set of domain-specific knowledge for both language understanding and generation tasks.

14 Apr 2021

Ranking vs. Classifying: Measuring Knowledge Base Completion Quality

marina-sp/classification_lp AKBC 2020

We randomly remove some of these correct answers from the data set, simulating the realistic scenario of real-world entities missing from a KB.
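
A minimal sketch of that kind of protocol (illustrative, not the paper's exact procedure): hold out a random subset of known-correct triples and check whether the completion model recovers them.

```python
# Illustrative sketch (not the paper's exact protocol): randomly remove a
# fraction of known-correct triples, then measure how many a completion model
# recovers. The completer here is an empty placeholder.
import random

random.seed(0)
kb = {("q1", "capital_of", "c1"), ("q2", "capital_of", "c2"),
      ("q3", "capital_of", "c3"), ("q4", "capital_of", "c4")}

held_out = set(random.sample(sorted(kb), k=2))  # simulate missing facts
observed = kb - held_out

def dummy_completer(observed_kb):
    """Placeholder model: proposes no new triples."""
    return set()

recovered = dummy_completer(observed) & held_out
print(f"recall on held-out triples: {len(recovered)}/{len(held_out)}")
```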

02 Feb 2021

IntKB: A Verifiable Interactive Framework for Knowledge Base Completion

bernhard2202/intkb COLING 2020

Our system is designed to learn continuously during the KB completion task, and therefore its performance on initially zero- and few-shot relations improves significantly over time.
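
A toy sketch of that continual loop (heavily simplified; the verification step is a hypothetical stand-in, not IntKB's text-based pipeline): predictions that pass verification are folded back into the KB, so relations that start out zero- or few-shot accumulate training signal over successive rounds.

```python
# Toy continual-learning loop (heavily simplified; not IntKB's pipeline):
# verified candidate facts are added back to the KB as new supervision.
kb = {("paris", "capital_of", "france")}
candidate_stream = [
    ("berlin", "capital_of", "germany"),   # round 1 candidate
    ("rome", "capital_of", "italy"),       # round 2 candidate
]

def verify(triple):
    """Hypothetical verification step (human or automatic fact check)."""
    return True  # assume the candidate is confirmed

for triple in candidate_stream:
    if verify(triple):
        kb.add(triple)  # accepted facts become new training signal

print(len(kb))  # the KB grows as verified completions are folded back in
```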

01 Dec 2020