WiRe57 : A Fine-Grained Benchmark for Open Information Extraction

We build a reference for the task of Open Information Extraction on five documents, tentatively resolving a number of issues that arise along the way, including inference and granularity, in order to better pinpoint the requirements of the task. We produce annotation guidelines specifying what is correct to extract and what is not, and use the resulting reference to score existing Open IE systems. We address the non-trivial problem of evaluating the extractions produced by systems against the reference tuples, and share our evaluation script. Among the seven extractors compared, we find the MinIE system to perform best.
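The released evaluation script scores system tuples against reference tuples at the token level. As a rough illustration of that idea only, here is a minimal Python sketch: the function names, the part-by-part overlap policy, and the example tuples are assumptions for illustration, not the authors' implementation (which also handles matching each system tuple to at most one reference tuple, among other details).

    from collections import Counter

    def part_overlap(pred: str, gold: str) -> int:
        """Tokens shared between one predicted and one gold tuple part
        (multiset intersection over lower-cased whitespace tokens)."""
        p, g = Counter(pred.lower().split()), Counter(gold.lower().split())
        return sum((p & g).values())

    def tuple_scores(pred, gold):
        """Token-level precision/recall of a predicted (subject, relation,
        object) triple against a gold triple, compared part by part."""
        shared = sum(part_overlap(p, g) for p, g in zip(pred, gold))
        n_pred = sum(len(p.split()) for p in pred)
        n_gold = sum(len(g.split()) for g in gold)
        precision = shared / n_pred if n_pred else 0.0
        recall = shared / n_gold if n_gold else 0.0
        return precision, recall

    def f1(p: float, r: float) -> float:
        return 2 * p * r / (p + r) if (p + r) else 0.0

    # Hypothetical example: a system extraction vs. a reference tuple.
    pred = ("the MinIE system", "performs", "best")
    gold = ("MinIE", "performs", "best among seven extractors")
    p, r = tuple_scores(pred, gold)
    print(f"precision={p:.2f} recall={r:.2f} f1={f1(p, r):.2f}")

Under this scheme a verbose extraction lowers precision (extra predicted tokens find no match in the gold tuple), while an overly terse one lowers recall, which is the trade-off the benchmark's token-level metric is designed to expose.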


Datasets


Introduced in the Paper: WiRe57

Used in the Paper: QA-SRL
Benchmark: Open Information Extraction on WiRe57

Model                                   Metric   Value   Global Rank
MinIE (Gashteovski et al., 2017)        F1       35.8    #4
ClausIE (Del Corro and Gemulla, 2013)   F1       34.2    #8
OpenIE 4 (Mausam, 2016)                 F1       26.7    #12
Ollie (Mausam et al., 2012)             F1       23.9    #14
ReVerb (Fader et al., 2011)             F1       20.0    #15
Stanford (Angeli et al., 2015)          F1       19.8    #16
PropS (Stanovsky et al., 2016)          F1       18.7    #17

Methods


No methods listed for this paper.