Document-level Event Extraction
13 papers with code • 1 benchmark • 1 dataset
Most existing event extraction (EE) methods merely extract event arguments within the sentence scope.
Existing methods are not effective due to two challenges of this task: a) the target event arguments are scattered across sentences; b) the correlation among events in a document is non-trivial to model.
We argue that sentence-level extractors are ill-suited to the DEE task where event arguments always scatter across sentences and multiple events may co-exist in a document.
Few works in the literature of event extraction have gone beyond individual sentences to make extraction decisions.
On the task of argument extraction, we achieve an absolute gain of 7.6% F1 and 5.7% F1 over the next best model on the RAMS and WikiEvents datasets respectively.
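The F1 gains above are typically computed by comparing predicted and gold argument tuples as sets. A minimal sketch (the tuple format and example data are hypothetical, not from either dataset):

```python
def f1_score(predicted, gold):
    """Micro F1 over predicted vs. gold argument tuples."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)  # correctly extracted arguments
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Hypothetical (event, role, span) tuples:
pred = [("attack", "place", "Kabul"), ("attack", "victim", "civilians")]
gold = [("attack", "place", "Kabul"), ("attack", "time", "Monday")]
print(round(f1_score(pred, gold), 2))  # 0.5
```

An "absolute gain of 7.6% F1" means the difference between two such scores, not a relative improvement.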
Finally, our system ranks No. 4 on the test-set leaderboard of this multi-format information extraction task; its F1 scores for the subtasks of relation extraction, sentence-level event extraction, and document-level event extraction are 79.887%, 85.179%, and 70.828%, respectively.
Most previous studies of document-level event extraction mainly focus on building argument chains in an autoregressive way, which achieves a certain success but is inefficient in both training and inference.
In this paper, we focus on extracting event arguments from an entire document, which mainly faces two critical problems: a) the long-distance dependency between trigger and arguments over sentences; b) the distracting context towards an event in the document.
RAAT: Relation-Augmented Attention Transformer for Relation Modeling in Document-Level Event Extraction
To further leverage relation information, we introduce a separate event relation prediction task and adopt a multi-task learning method to explicitly enhance event extraction performance.
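In multi-task setups like this, the auxiliary relation-prediction loss is commonly folded into the main objective as a weighted sum. A minimal sketch, assuming a simple fixed weighting (the function name and weighting scheme are illustrative, not RAAT's actual implementation):

```python
def multi_task_loss(ee_loss, relation_loss, relation_weight=0.5):
    """Combine the main event-extraction loss with the auxiliary
    event-relation prediction loss via a weighted sum.
    The weight is a hyperparameter balancing the two tasks."""
    return ee_loss + relation_weight * relation_loss

# Hypothetical per-batch loss values:
total = multi_task_loss(ee_loss=1.2, relation_loss=0.8, relation_weight=0.5)
print(round(total, 2))  # 1.6
```

The auxiliary task shares the encoder with the main task, so gradients from relation prediction shape the representations used for event extraction.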