Inductive logic programming
43 papers with code • 1 benchmark • 2 datasets
Libraries
Use these libraries to find Inductive logic programming models and implementations.

Latest papers
Towards One-Shot Learning for Text Classification using Inductive Logic Programming
With the ever-increasing potential of AI to perform personalised tasks, it is becoming essential to develop new machine learning techniques that are data-efficient and do not require hundreds or thousands of training examples.
Learning MDL logic programs from noisy data
Many inductive logic programming approaches struggle to learn programs from noisy data.
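The idea behind MDL-based learning from noisy data can be illustrated with a minimal sketch. The scoring function below is hypothetical and only illustrative of the minimum-description-length principle, not the paper's exact encoding: a hypothesis is charged for its own size plus the examples it gets wrong, so a small program that tolerates a little noise can beat a large program that fits every example.

```python
def mdl_score(program_size, false_positives, false_negatives):
    """Illustrative MDL-style cost (assumed formulation, not the paper's):
    literals in the hypothesis plus misclassified examples, which must be
    encoded separately as exceptions."""
    return program_size + false_positives + false_negatives

# A compact program that misclassifies two noisy examples...
small = mdl_score(program_size=5, false_positives=1, false_negatives=1)
# ...can score better than a large program that fits the noise exactly.
large = mdl_score(program_size=20, false_positives=0, false_negatives=0)
```

Under this score, `small` (7) beats `large` (20), which is why MDL objectives resist overfitting to noise.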
Learning Logic Specifications for Soft Policy Guidance in POMCP
In this paper, we use inductive logic programming to learn logic specifications from traces of POMCP executions, i.e., sets of belief-action pairs generated by the planner.
Generalisation Through Negation and Predicate Invention
The ability to generalise from a small number of examples is a fundamental challenge in machine learning.
Relational program synthesis with numerical reasoning
Our approach can identify numerical values in linear arithmetic fragments, such as real difference logic, and from infinite domains, such as real numbers or integers.
Differentiable Inductive Logic Programming in High-Dimensional Space
Synthesizing large logic programs through symbolic Inductive Logic Programming (ILP) typically requires intermediate definitions.
Learning programs with magic values
A magic value in a program is a constant symbol that is essential for the execution of the program but has no clear explanation for its choice.
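A short sketch makes the definition concrete (the function and constant below are hypothetical illustrations, not taken from the paper):

```python
# The constant 86400 is a "magic value": it is essential for the program to
# compute the right answer, but nothing in the code explains the choice
# (it happens to be the number of seconds in a day). An ILP learner must
# discover such constants itself when they appear in no background knowledge.
def seconds_until_tomorrow(seconds_since_midnight):
    return 86400 - seconds_since_midnight
```

Learning programs like this is hard for ILP systems precisely because the space of candidate constants is effectively unbounded.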
Composition of Relational Features with an Application to Explaining Black-Box Predictors
Using a notion of explanations based on the compositional structure of features in a CRM, we provide empirical evidence on synthetic data that appropriate explanations can be identified, and demonstrate the use of CRMs as 'explanation machines' for black-box models that do not explain their predictions.
Explanatory machine learning for sequential human teaching
We propose a framework for the effects of sequential teaching on comprehension, based on an existing definition of comprehensibility, and provide supporting evidence from data collected in human trials.
Learning logic programs by discovering where not to search
We learn constraints that describe where not to search and use them to bootstrap a constraint-driven ILP system.