Knowledge Base Completion

64 papers with code • 0 benchmarks • 2 datasets

Knowledge base completion is the task of automatically inferring missing facts by reasoning about the information already present in the knowledge base. A knowledge base is a collection of relational facts, often represented as (subject, relation, object) triples.
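
For a concrete (toy) picture of the task, the sketch below, in plain Python with made-up facts, stores a knowledge base as a set of such triples and shows the kind of query a completion model must answer when a fact is missing:

```python
# A toy knowledge base as a set of (subject, relation, object) triples.
kb = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Berlin", "capital_of", "Germany"),
}

def complete(subject, relation):
    """Return all known objects for a (subject, relation) query."""
    return {o for (s, r, o) in kb if s == subject and r == relation}

print(complete("Paris", "capital_of"))    # {'France'}
print(complete("Germany", "located_in"))  # set() -- a missing fact a KBC model should infer
```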

Latest papers with no code

DOCENT: Learning Self-Supervised Entity Representations from Large Document Collections

no code yet • EACL 2021

This enables a new class of powerful, high-capacity representations that can ultimately distill much of the useful information about an entity from multiple text sources, without any human supervision.

Modelling General Properties of Nouns by Selectively Averaging Contextualised Embeddings

no code yet • 4 Dec 2020

While the success of pre-trained language models has largely eliminated the need for high-quality static word vectors in many NLP applications, such vectors continue to play an important role in tasks where words need to be modelled in the absence of linguistic context.

Association Rules Enhanced Knowledge Graph Attention Network

no code yet • 14 Nov 2020

However, in most existing embedding methods, only fact triplets are utilized, and logical rules have not been thoroughly studied for the knowledge base completion task.
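
As a toy illustration of how a logical rule can complement fact triples (the rule and facts below are made up, and this is not the paper's method), a single Horn rule can be applied forward over the KB to propose missing triples:

```python
# Forward application of one Horn rule over fact triples:
# (X, capital_of, Y) and (Y, located_in, Z) => (X, located_in, Z)
kb = {
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
}

def apply_rule(kb):
    inferred = set()
    for (x, r1, y) in kb:
        if r1 != "capital_of":
            continue
        for (y2, r2, z) in kb:
            if y2 == y and r2 == "located_in":
                inferred.add((x, "located_in", z))
    return inferred - kb  # only genuinely new facts

print(apply_rule(kb))  # {('Paris', 'located_in', 'Europe')}
```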

Continuous and Interactive Factual Knowledge Learning in Verification Dialogues

no code yet • NeurIPS Workshop HAMLETS 2020

In this paper, we eliminate this assumption and allow s, r and/or t to be unknown to the KB, which we call open-world knowledge base completion (OKBC).
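
A hypothetical sketch of the assumption being relaxed: closed-world KBC only scores triples whose subject, relation, and object already belong to the KB's vocabulary, while OKBC must also handle queries that fall outside it (the entity and relation sets below are made up):

```python
# Hypothetical illustration of the closed-world assumption that OKBC relaxes.
entities = {"Paris", "France"}
relations = {"capital_of"}

def is_closed_world_query(s, r, t):
    """True if every part of the query triple is already known to the KB."""
    return s in entities and r in relations and t in entities

print(is_closed_world_query("Paris", "capital_of", "France"))  # True
print(is_closed_world_query("Tokyo", "capital_of", "Japan"))   # False -> open-world case
```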

A Survey on Graph Neural Networks for Knowledge Graph Completion

no code yet • 24 Jul 2020

Knowledge Graphs are increasingly becoming popular for a variety of downstream tasks like Question Answering and Information Retrieval.

Exploiting Structured Knowledge in Text via Graph-Guided Representation Learning

no code yet • EMNLP 2020

In contrast to existing paradigms, our approach uses knowledge graphs implicitly, only during pre-training, to inject language models with structured knowledge via learning from raw text.

Revisiting Evaluation of Knowledge Base Completion Models

no code yet • AKBC 2020

To address these issues, we gather a semi-complete KG, referred to as YAGO3-TC, using a random subgraph from the test and validation data of YAGO3-10, which enables us to measure triple classification accuracy reliably on this data.
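
For reference, triple classification thresholds a model's score for each candidate triple and reports plain accuracy against gold labels; a minimal sketch with made-up scores, labels, and threshold:

```python
# Minimal sketch of triple classification accuracy: one model score per
# candidate triple, a decision threshold, and gold labels (all made up).
scored = [  # (score, gold_label) pairs
    (0.91, True), (0.15, False), (0.72, True), (0.55, False),
]
threshold = 0.5

correct = sum((score >= threshold) == label for score, label in scored)
accuracy = correct / len(scored)
print(f"triple classification accuracy: {accuracy:.2f}")  # 0.75
```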

Mining Commonsense Facts from the Physical World

no code yet • 8 Feb 2020

In this paper, we propose the new task of mining commonsense facts from raw text that describes the physical world.

Knowledge Graph Embedding via Graph Attenuated Attention Networks

no code yet • IEEE Access 2019

However, these methods assign the same weight to every relation path in the knowledge graph and ignore the rich information present in neighbor nodes, which results in incomplete mining of triple features.

Reasoning Over Paths via Knowledge Base Completion

no code yet • WS 2019

We demonstrate that our method is able to effectively rank a list of known paths between a pair of entities and also come up with plausible paths that are not present in the knowledge graph.
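
A toy sketch of the setting (not the paper's model): enumerate bounded-length relation paths between two entities in a triple graph; a path-ranking approach would then score and rank these paths (the facts below are made up):

```python
from collections import defaultdict

# Toy triple graph and bounded-depth path enumeration between two entities.
triples = [
    ("Paris", "capital_of", "France"),
    ("France", "located_in", "Europe"),
    ("Paris", "located_in", "Europe"),
]
adj = defaultdict(list)
for s, r, o in triples:
    adj[s].append((r, o))

def paths(src, dst, max_hops=3, prefix=()):
    """Yield relation paths (tuples of relation names) from src to dst."""
    if src == dst and prefix:
        yield prefix
        return
    if len(prefix) >= max_hops:
        return
    for rel, nxt in adj[src]:
        yield from paths(nxt, dst, max_hops, prefix + (rel,))

for p in paths("Paris", "Europe"):
    print(p)  # ('capital_of', 'located_in') and ('located_in',)
```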