RuDaS: Synthetic Datasets for Rule Learning and Evaluation Tools

16 Sep 2019  ·  Cristina Cornelio, Veronika Thost

Logical rules are a popular knowledge representation language in many domains: they represent background knowledge and encode, in compact form, information that can be derived from given facts. However, rule formulation is a complex process that requires deep domain expertise, and it is further challenged by today's often large, heterogeneous, and incomplete knowledge graphs. Several approaches for learning rules automatically from a set of input example facts have been proposed over time, including, more recently, neural systems. Yet the area lacks adequate datasets and evaluation approaches: existing datasets often resemble toy examples that neither cover the various kinds of dependencies between rules nor allow for testing scalability. We present a tool for generating different kinds of datasets and for evaluating rule learning systems, including new performance measures.
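To make the setting concrete, the following is a minimal illustrative sketch of the kind of synthetic dataset generation the abstract describes; it is not the RuDaS tool or its API, and all predicate names, constants, and helper functions below are hypothetical. The idea is to sample a few rules, sample random ground facts, and close the facts under the rules by forward chaining, so that a learner can later be judged on whether its recovered rules reproduce the derived facts.

```python
# Illustrative sketch only (not the RuDaS implementation): generate a tiny
# synthetic rule-learning dataset from sampled chain rules and random facts.
import random

random.seed(0)

PREDICATES = ["p0", "p1", "p2", "p3"]
CONSTANTS = [f"c{i}" for i in range(10)]

def sample_chain_rule():
    """Sample a rule of the form head(X, Z) :- body1(X, Y), body2(Y, Z)."""
    head, body1, body2 = random.sample(PREDICATES, 3)
    return (head, body1, body2)

def sample_facts(n):
    """Sample n random ground facts pred(a, b)."""
    return {(random.choice(PREDICATES),
             random.choice(CONSTANTS),
             random.choice(CONSTANTS)) for _ in range(n)}

def forward_chain(facts, rules):
    """Apply the rules to the facts until a fixpoint is reached."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body1, body2 in rules:
            for (p, a, b) in list(derived):
                if p != body1:
                    continue
                for (q, c, d) in list(derived):
                    if q == body2 and c == b and (head, a, d) not in derived:
                        derived.add((head, a, d))
                        changed = True
    return derived

rules = [sample_chain_rule() for _ in range(2)]
support_facts = sample_facts(30)
all_facts = forward_chain(support_facts, rules)

# A rule learner would receive the facts (possibly with some removed to
# simulate incompleteness) and should recover rules whose consequences
# match the derived facts.
print("Rules:", rules)
print("Derived facts:", len(all_facts) - len(support_facts))
```

Varying the rule shapes, the amount of noise, and how many derived facts are withheld is what lets such a generator cover different dependency structures and incompleteness levels, rather than a single toy example.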


Datasets


Introduced in the Paper:

RuDaS

Results from the Paper


Task                          Dataset   Model       H-Score   R-Score   Global Rank
Inductive logic programming   RuDaS     AMIE+       0.2321    0.335     #1
Inductive logic programming   RuDaS     NTP         0.0728    0.1811    #4
Inductive logic programming   RuDaS     Neural-LP   0.1025    0.1906    #3
Inductive logic programming   RuDaS     FOIL        0.152     0.2728    #2

Methods


No methods listed for this paper.