Neural-symbolic and statistical relational artificial intelligence both integrate frameworks for learning with logical reasoning.
We argue that a key limitation of existing systems is that they use entailment to guide the hypothesis search.
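To make the claim concrete, here is a minimal sketch of what entailment-guided hypothesis search looks like: candidate rule bodies are accepted or pruned purely according to which examples they entail. This is a toy generate-and-test setup with invented predicates and examples, not any specific ILP system.

```python
from itertools import combinations

# Toy background knowledge: each example is the set of ground atoms true for it.
positives = [{"has_wings", "lays_eggs"}, {"has_wings", "lays_eggs", "swims"}]
negatives = [{"swims"}, {"lays_eggs"}]
attributes = {"has_wings", "lays_eggs", "swims"}

def entails(body, example):
    # A rule `target :- body` entails an example iff every body atom holds in it.
    return body <= example

best = None
for size in range(1, len(attributes) + 1):  # most general hypotheses first
    for combo in combinations(sorted(attributes), size):
        body = set(combo)
        # Entailment is the sole search signal: keep a hypothesis only if it
        # entails every positive example and no negative one.
        if all(entails(body, e) for e in positives) and \
           not any(entails(body, e) for e in negatives):
            best = body
            break
    if best is not None:
        break

print("learned rule: target :-", ", ".join(sorted(best)))
```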
Common criticisms of state-of-the-art machine learning include poor generalisation, a lack of interpretability, and a need for large amounts of training data.
We introduce DeepProbLog, a neural probabilistic logic programming language that incorporates deep learning by means of neural predicates.
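The mechanism behind a neural predicate can be sketched outside the language itself: a neural network defines a distribution over the outcomes of a predicate, and probabilistic inference over a logic program combines those distributions, so that supervision on a query propagates gradients into the network. The sketch below follows the MNIST-addition example from the DeepProbLog papers in spirit, but the network, the training loop, and the enumeration-based inference are illustrative assumptions, not DeepProbLog's actual implementation (which compiles queries to arithmetic circuits).

```python
import torch
import torch.nn as nn

# Illustrative digit classifier; in DeepProbLog this network would back a
# neural predicate such as digit(Image, D), whose outputs are treated as
# probabilities of ground facts.
class DigitNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
        )

    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)  # distribution over digits 0..9

def prob_of_sum(p1, p2, z):
    """P(sum = z) under addition(X, Y, Z) :- digit(X, A), digit(Y, B), Z is A + B.
    Marginalises over all digit pairs (a, b) with a + b = z."""
    total = torch.zeros(())
    for a in range(10):
        b = z - a
        if 0 <= b <= 9:
            total = total + p1[a] * p2[b]
    return total

model = DigitNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Dummy batch: the sum of the two images is supervised, but the individual
# digit labels are never observed (distant supervision).
img1, img2, target_sum = torch.randn(1, 28, 28), torch.randn(1, 28, 28), 7

p1, p2 = model(img1)[0], model(img2)[0]
loss = -torch.log(prob_of_sum(p1, p2, target_sum))  # NLL of the query
opt.zero_grad()
loss.backward()  # gradients flow through the logic into the network
opt.step()
```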
In line with previous work on static knowledge graphs, we propose to address this problem by learning latent entity and relation type representations.
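As a concrete (and deliberately static) instance of such latent representations, here is a minimal sketch using a DistMult-style bilinear score; the scoring function, the sizes, and the names are common illustrative choices, not taken from the work above.

```python
import torch
import torch.nn as nn

# Toy sizes; real knowledge graphs have far more entities and relation types.
num_entities, num_relations, dim = 1000, 50, 64

entity_emb = nn.Embedding(num_entities, dim)
relation_emb = nn.Embedding(num_relations, dim)

def score(h, r, t):
    """DistMult-style score <e_h, w_r, e_t>: higher means the triple
    (head, relation, tail) is more plausible."""
    return (entity_emb(h) * relation_emb(r) * entity_emb(t)).sum(-1)

# Rank candidate tails for a link-prediction query (head, relation, ?).
h, r = torch.tensor([3]), torch.tensor([7])
all_tails = torch.arange(num_entities)
scores = score(h.expand(num_entities), r.expand(num_entities), all_tails)
print(scores.topk(5).indices)  # top-5 predicted tail entities
```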
This background knowledge is often obtained by allowing the clustering system to pose pairwise queries to the user: should these two elements be in the same cluster or not?
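Here is a minimal sketch of that query loop, with the user simulated by ground-truth labels: clusters are greedily merged in order of distance, and every candidate merge is first posed to the oracle as a same-cluster question. The greedy union-find merging is an illustrative assumption, not a specific system from the literature; because this oracle is consistent with a single labelling, its answers never conflict.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (5, 2)), rng.normal(3, 0.3, (5, 2))])
true_labels = np.array([0] * 5 + [1] * 5)  # stands in for the user

def oracle(i, j):
    """Pairwise query: should elements i and j be in the same cluster?"""
    return true_labels[i] == true_labels[j]

# Each element starts in its own cluster (union-find forest).
parent = list(range(len(X)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]  # path halving
        i = parent[i]
    return i

pairs = sorted(combinations(range(len(X)), 2),
               key=lambda p: np.linalg.norm(X[p[0]] - X[p[1]]))
queries = 0
for i, j in pairs:
    ri, rj = find(i), find(j)
    if ri == rj:
        continue  # already linked, no query needed
    queries += 1
    if oracle(i, j):     # must-link: merge the two clusters
        parent[ri] = rj
    # else cannot-link: leave the clusters separate

print("queries asked:", queries)
print("cluster ids:", [find(i) for i in range(len(X))])
```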
This work addresses these issues and shows that (1) latent features created by clustering are interpretable and capture interesting properties of data; (2) they identify local regions of instances that match well with the label, which partially explains their benefit; and (3) although the number of latent features generated by this approach is large, often many of them are highly redundant and can be removed without hurting performance much.
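A minimal sketch of the kind of latent features in question, assuming scikit-learn: cluster-membership indicators from several k-means runs serve as latent features, and a simple correlation filter illustrates point (3), that many of them are redundant. The granularities and the 0.95 threshold are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=200, centers=4, random_state=0)

# Build latent features: one binary cluster-membership indicator per cluster,
# across several clusterings of different granularities.
features = []
for k in (2, 4, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    for c in range(k):
        features.append((labels == c).astype(float))
F = np.column_stack(features)  # (n_samples, 2 + 4 + 8) indicator features

# Many indicators are near-duplicates: drop a feature when it is highly
# correlated with one we already kept.
kept = []
for j in range(F.shape[1]):
    if all(abs(np.corrcoef(F[:, j], F[:, i])[0, 1]) < 0.95 for i in kept):
        kept.append(j)
print(f"kept {len(kept)} of {F.shape[1]} latent features")
```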