Inductive logic programming
44 papers with code • 1 benchmark • 2 datasets
Most implemented papers
Learn to Explain Efficiently via Neural Logic Inductive Learning
The capability of making interpretable and self-explanatory decisions is essential for developing responsible machine learning systems.
Symbolic Graph Embedding using Frequent Pattern Mining
The proposed SGE approach on a venue classification task outperforms shallow node embedding methods such as DeepWalk, and performs similarly to metapath2vec, a black-box representation learner that can exploit node and edge types in a given graph.
User Friendly Automatic Construction of Background Knowledge: Mode Construction from ER Diagrams
One of the key advantages of Inductive Logic Programming systems is the ability of the domain experts to provide background knowledge as modes that allow for efficient search through the space of hypotheses.
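To make this concrete, mode declarations in Aleph/Progol-style systems constrain which predicates may appear in the head and body of a learned clause, and whether each argument is an input (`+`), output (`-`), or constant (`#`). A small illustrative fragment, using hypothetical predicate and type names, might look like:

```prolog
% Head mode: learn clauses defining grandparent/2 (at most 1 proof per call).
:- modeh(1, grandparent(+person, +person)).

% Body modes: parent/2 may appear in the body, chaining an input person
% to a new (output) person, or vice versa.
:- modeb(*, parent(+person, -person)).
:- modeb(*, parent(-person, +person)).
```

The `+`/`-` annotations are what make the hypothesis search efficient: they restrict variable sharing between literals, so the system only enumerates clauses whose arguments are properly chained.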
Knowledge Refactoring for Inductive Program Synthesis
We introduce the knowledge refactoring problem, where the goal is to restructure a learner's knowledge base to reduce its size and minimise redundancy.
Learning programs by learning from failures
In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain.
Differentiable Inductive Logic Programming for Structured Examples
Our framework can be scaled to deal with complex programs that consist of several clauses with function symbols.
Predicate Invention by Learning From Failures
Discovering novel high-level concepts is one of the most important steps needed for human-level AI.
Expressive Explanations of DNNs by Combining Concept Analysis with ILP
We show that our explanation is faithful to the original black-box model.
Inclusion of Domain-Knowledge into GNNs using Mode-Directed Inverse Entailment
We also provide experimental evidence that BotGNNs compare favourably both to multi-layer perceptrons (MLPs) using features that represent a "propositionalised" form of the background knowledge, and to a standard ILP approach based on most-specific clauses.
FF-NSL: Feed-Forward Neural-Symbolic Learner
To address this limitation, we propose a neural-symbolic learning framework, the Feed-Forward Neural-Symbolic Learner (FFNSL), which integrates a logic-based machine learning system capable of learning from noisy examples with neural networks, in order to learn interpretable knowledge from labelled unstructured data.