Knowledge Base Completion
64 papers with code • 0 benchmarks • 2 datasets
Knowledge base completion is the task of automatically inferring missing facts by reasoning over the information already present in a knowledge base. A knowledge base is a collection of relational facts, often represented as (subject, relation, object) triples.
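As a toy illustration of this setup (the entities, relations, and the DistMult-style trilinear score below are illustrative choices, not taken from any paper listed here), a completion query (subject, relation, ?) can be answered by ranking candidate objects under an embedding-based scoring function:

```python
import numpy as np

# Toy knowledge base of (subject, relation, object) triples.
triples = [
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
]

entities = sorted({e for s, _, o in triples for e in (s, o)})
relations = sorted({r for _, r, _ in triples})

# Randomly initialized embeddings; a real model would train these
# so that observed triples score higher than corrupted ones.
rng = np.random.default_rng(0)
dim = 16
ent_emb = {e: rng.normal(size=dim) for e in entities}
rel_emb = {r: rng.normal(size=dim) for r in relations}

def score(s, r, o):
    """DistMult-style score: higher means the triple is more plausible."""
    return float(np.sum(ent_emb[s] * rel_emb[r] * ent_emb[o]))

# Rank candidate objects for the query ("Paris", "capital_of", ?).
candidates = sorted(entities, key=lambda o: -score("Paris", "capital_of", o))
print(candidates[0])  # embeddings are untrained, so this ranking is arbitrary
```

Training replaces the random initialization with embeddings fit to the observed triples, after which the top-ranked candidates for a query are the model's predicted missing facts.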
Benchmarks
These leaderboards are used to track progress in Knowledge Base Completion.
Most implemented papers
Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach
Knowledge bases are employed in a variety of applications from natural language processing to semantic web search; alas, in practice their usefulness is hurt by their incompleteness.
A Comparative Study of Distributional and Symbolic Paradigms for Relational Learning
Many real-world domains can be expressed as graphs and, more generally, as multi-relational knowledge graphs.
Type-Sensitive Knowledge Base Inference Without Explicit Type Supervision
State-of-the-art knowledge base completion (KBC) models predict a score for every known or unknown fact via a latent factorization over entity and relation embeddings.
Attributed and Predictive Entity Embedding for Fine-Grained Entity Typing in Knowledge Bases
Fine-grained entity typing aims at identifying the semantic type of an entity in a knowledge base (KB).
Cross-lingual Knowledge Projection Using Machine Translation and Target-side Knowledge Base Completion
Considerable effort has been devoted to building commonsense knowledge bases.
Modeling relation paths for knowledge base completion via joint adversarial training
By treating relations and multi-hop paths as two different input sources, we use a feature extractor, which is shared by two downstream components (i.e., relation classifier and source discriminator), to capture shared/similar information between them.
End-to-end Structure-Aware Convolutional Networks for Knowledge Base Completion
The recent graph convolutional network (GCN) provides another way of learning graph node embedding by successfully utilizing graph connectivity structure.
Learning from positive and unlabeled data: a survey
Learning from positive and unlabeled data or PU learning is the setting where a learner only has access to positive examples and unlabeled data.
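The PU setting can be illustrated with a minimal numpy-only sketch (the synthetic clusters and the naive treat-unlabeled-as-negative baseline below are assumptions for illustration, not the survey's method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data: true positives cluster at +2, true negatives at -2.
pos = rng.normal(loc=2.0, size=(100, 2))
neg = rng.normal(loc=-2.0, size=(100, 2))

# PU setting: only some positives are labeled; the rest are unlabeled,
# a mix of hidden positives and true negatives.
labeled_pos = pos[:40]
unlabeled = np.vstack([pos[40:], neg])

# Naive PU baseline: treat every unlabeled example as negative and fit
# a logistic regression by gradient descent.
X = np.vstack([labeled_pos, unlabeled])
y = np.concatenate([np.ones(len(labeled_pos)), np.zeros(len(unlabeled))])
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * X.T @ grad / len(X)
    b -= 0.1 * grad.mean()

# Despite the label noise, the learned scores still rank the hidden
# positives above the true negatives on average.
score = lambda Z: Z @ w + b
print(score(pos[40:]).mean() > score(neg).mean())  # True
```

More refined PU methods covered by such surveys correct for the label noise explicitly, e.g. by estimating the fraction of positives among the unlabeled data instead of treating them all as negative.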
Combining Axiom Injection and Knowledge Base Completion for Efficient Natural Language Inference
In logic-based approaches to reasoning tasks such as Recognizing Textual Entailment (RTE), it is important for a system to have a large amount of knowledge data.
Fact Discovery from Knowledge Base via Facet Decomposition
We also propose a novel auto-encoder-based facet component to estimate some facets of the fact.