Paraphrasing vs Coreferring: Two Sides of the Same Coin

We study the potential synergy between two NLP tasks that both confront predicate lexical variability: identifying predicate paraphrases and event coreference resolution. First, we used annotations from an event coreference dataset as distant supervision to re-score heuristically extracted predicate paraphrases. The new scoring improved average precision by more than 18 points over the ranking produced by the original scoring method. Then, we used the same re-ranking features as additional inputs to a state-of-the-art event coreference resolution model, which yielded modest but consistent improvements in the model's performance. The results suggest a promising direction for leveraging the data and models of each task to the benefit of the other.
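The distant-supervision step described above can be sketched as follows. This is not the authors' code: the function name, the feature, and the smoothed scoring formula are illustrative assumptions. The idea is simply to score each candidate predicate pair by how often its two predicates head mentions of the same gold event versus different events.

```python
# Hedged sketch (not the paper's implementation): re-scoring heuristically
# extracted predicate paraphrase pairs with distant supervision from event
# coreference annotations. All names and the formula are assumptions.

from collections import Counter

def rescore_paraphrases(candidate_pairs, coref_mention_pairs):
    """Score each candidate predicate pair by the (add-one smoothed) ratio
    of coreferent to total co-occurrences in coreference annotations.

    candidate_pairs: iterable of (pred_a, pred_b) from a paraphrase resource.
    coref_mention_pairs: iterable of (pred_a, pred_b, same_event) triples
        derived from gold event coreference clusters.
    """
    pos, neg = Counter(), Counter()
    for a, b, same_event in coref_mention_pairs:
        key = tuple(sorted((a, b)))
        (pos if same_event else neg)[key] += 1

    scores = {}
    for a, b in candidate_pairs:
        key = tuple(sorted((a, b)))
        # Smoothed fraction of co-occurrences that were coreferent.
        scores[(a, b)] = (pos[key] + 1) / (pos[key] + neg[key] + 2)
    return scores

scores = rescore_paraphrases(
    [("acquire", "buy"), ("acquire", "sell")],
    [("acquire", "buy", True), ("acquire", "buy", True),
     ("acquire", "sell", False)],
)
# "acquire"/"buy" now outranks "acquire"/"sell".
```

In this toy example, a pair whose predicates frequently corefer receives a higher score than one whose predicates appear in non-coreferent mentions, which is the ranking behavior the re-scoring is meant to induce.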

PDF Abstract (Findings of 2020)

Datasets


Task:         Event Cross-Document Coreference Resolution
Dataset:      ECB+ test
Model:        Cattan et al
Metric:       CoNLL F1
Metric Value: 81.0
Global Rank:  #6

Methods


No methods listed for this paper.