Streamlining Cross-Document Coreference Resolution: Evaluation and Modeling

23 Sep 2020 · Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, Ido Dagan

Recent evaluation protocols for Cross-document (CD) coreference resolution have often been inconsistent or lenient, leading to incomparable results across works and overestimation of performance. To facilitate proper future research on this task, our primary contribution is a pragmatic evaluation methodology which assumes access only to raw text rather than gold mentions, disregards singleton prediction, and addresses the typical targeted settings in CD coreference resolution. Aiming to set baseline results for future research that would follow our evaluation methodology, we build the first end-to-end model for this task. Our model adapts and extends recent neural models for within-document coreference resolution to the CD setting, and it outperforms state-of-the-art results by a significant margin.
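Two concrete consequences of this protocol can be sketched in code: predicted singleton clusters are excluded before scoring, and the reported CoNLL F1 is the unweighted average of the MUC, B³, and CEAF F1 scores. The helper names below are illustrative, not from the paper's codebase:

```python
def drop_singletons(clusters):
    """Discard predicted clusters containing a single mention,
    since the proposed protocol disregards singleton prediction."""
    return [c for c in clusters if len(c) > 1]

def conll_f1(muc_f1, b3_f1, ceaf_f1):
    """CoNLL F1 is the unweighted mean of the MUC, B^3, and CEAF F1 scores."""
    return (muc_f1 + b3_f1 + ceaf_f1) / 3.0

# Example: a system outputs one genuine cluster and one spurious singleton.
predicted = [["mention_a", "mention_b"], ["mention_c"]]
print(drop_singletons(predicted))           # [['mention_a', 'mention_b']]
print(round(conll_f1(0.84, 0.81, 0.75), 4))  # 0.8
```

The per-metric F1 values above are made-up numbers used purely to illustrate the averaging.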

Datasets: ECB+


| Task | Dataset | Model | Metric | Value | Global Rank |
| --- | --- | --- | --- | --- | --- |
| Entity Cross-Document Coreference Resolution | ECB+ test | Rand CD-LM | CoNLL F1 | 80.2 | #4 |

Results from Other Papers


| Task | Dataset | Model | Metric | Value | Rank |
| --- | --- | --- | --- | --- | --- |
| Event Cross-Document Coreference Resolution | ECB+ test | Rand CD-LM | CoNLL F1 | 83.5 | #5 |
