BenchIE: Open Information Extraction Evaluation Based on Facts, Not Tokens

14 Sep 2021 · Kiril Gashteovski, Mingying Yu, Bhushan Kotnis, Carolin Lawrence, Goran Glavaš, Mathias Niepert

Intrinsic evaluations of OIE systems are carried out either manually -- with human evaluators judging the correctness of extractions -- or automatically, on standardized benchmarks. The latter, while much more cost-effective, is less reliable, primarily because of the incompleteness of the existing OIE benchmarks: the ground truth extractions do not include all acceptable variants of the same fact, leading to unreliable assessment of models' performance. Moreover, the existing OIE benchmarks are available for English only. In this work, we introduce BenchIE: a benchmark and evaluation framework for comprehensive evaluation of OIE systems for English, Chinese, and German. In contrast to existing OIE benchmarks, BenchIE takes into account informational equivalence of extractions: our gold standard consists of fact synsets, clusters in which we exhaustively list all surface forms of the same fact. We benchmark several state-of-the-art OIE systems using BenchIE and demonstrate that these systems are significantly less effective than indicated by existing OIE benchmarks. We make BenchIE (data and evaluation code) publicly available.
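The fact-synset idea in the abstract can be illustrated with a minimal scoring sketch. This is an assumed simplification, not BenchIE's actual evaluation code: here an extracted triple counts as correct only if it exactly matches one of the surface forms listed in some gold fact synset, and a fact synset counts as recalled if any of its surface forms was extracted.

```python
# Hedged sketch of fact-level (not token-level) OIE scoring.
# benchie_scores, and the example triples below, are illustrative
# names and data, not part of the released BenchIE framework.

def benchie_scores(extractions, fact_synsets):
    """extractions: set of (subject, relation, object) triples from an OIE system.
    fact_synsets: list of sets; each set holds all acceptable surface-form
    triples of one underlying fact."""
    # An extraction is correct if it appears in any fact synset.
    correct = {t for t in extractions
               if any(t in synset for synset in fact_synsets)}
    # A fact is recalled if the system produced any of its surface forms.
    recalled = [s for s in fact_synsets if s & extractions]
    precision = len(correct) / len(extractions) if extractions else 0.0
    recall = len(recalled) / len(fact_synsets) if fact_synsets else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# One gold fact with two equivalent surface forms:
gold = [
    {("Michael Jordan", "played for", "the Chicago Bulls"),
     ("Michael Jordan", "played for", "Chicago Bulls")},
]
# System output: one acceptable variant, one incorrect triple.
system = {("Michael Jordan", "played for", "Chicago Bulls"),
          ("Michael Jordan", "played", "for")}
p, r, f1 = benchie_scores(system, gold)
print(p, r)  # 0.5 1.0
```

Listing variants exhaustively is what lets exact matching work here: without the second surface form in the synset, the correct extraction "Chicago Bulls" (no article) would be wrongly penalized, which is precisely the incompleteness problem the abstract describes.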


Datasets

Introduced in the Paper: BenchIE
Task: Open Information Extraction · Dataset: BenchIE (ranks shown in parentheses)

| Model        | Precision | Recall    | F1        |
|--------------|-----------|-----------|-----------|
| ClausIE      | 0.50 (#1) | 0.26 (#2) | 0.34 (#1) |
| MinIE        | 0.43 (#2) | 0.28 (#1) | –         |
| Stanford OIE | 0.11 (#8) | 0.16 (#4) | 0.13 (#5) |
| ROIE-T       | 0.37 (#4) | 0.08 (#7) | 0.13 (#5) |
| ROIE-N       | 0.20 (#7) | 0.09 (#6) | 0.13 (#5) |
| OpenIE6      | 0.31 (#5) | 0.21 (#3) | 0.25 (#2) |
| Naive OIE    | 0.03 (#10)| 0.02 (#9) | 0.03 (#9) |
| M2OIE (EN)   | 0.39 (#3) | –         | 0.23 (#3) |
| M2OIE (ZH)   | 0.26 (#6) | 0.13 (#5) | 0.17 (#4) |
| M2OIE (DE)   | 0.09 (#9) | 0.03 (#8) | 0.04 (#8) |
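The F1 column follows from precision and recall as their harmonic mean, F1 = 2PR / (P + R). A quick check (hypothetical helper, not part of the benchmark code) confirms the reported values are consistent under two-decimal rounding:

```python
# F1 is the harmonic mean of precision and recall.
def f1(p, r):
    return 2 * p * r / (p + r) if p + r else 0.0

print(round(f1(0.50, 0.26), 2))  # ClausIE: 0.34
print(round(f1(0.31, 0.21), 2))  # OpenIE6: 0.25
```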

Methods

No methods listed for this paper.