Coreference Resolution without Span Representations

ACL 2021 · Yuval Kirstain, Ori Ram, Omer Levy

The introduction of pretrained language models has reduced many complex task-specific NLP models to simple lightweight layers. An exception to this trend is coreference resolution, where a sophisticated task-specific model is appended to a pretrained transformer encoder. While highly effective, the model has a very large memory footprint -- primarily due to dynamically-constructed span and span-pair representations -- which hinders the processing of complete documents and the ability to train on multiple instances in a single batch. We introduce a lightweight end-to-end coreference model that removes the dependency on span representations, handcrafted features, and heuristics. Our model performs competitively with the current standard model, while being simpler and more efficient.
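The key idea in the abstract is to score candidate spans directly from the contextualized representations of their start and end tokens, rather than materializing explicit span or span-pair vectors. The following is a minimal sketch, not the authors' released code, of what such start/end mention scoring could look like in PyTorch; the module layout, dimensions, and the specific bilinear interaction are illustrative assumptions.

```python
# Sketch only: start/end-based mention scoring without explicit span vectors.
# Module names, sizes, and the exact scoring form are assumptions for illustration.
import torch
import torch.nn as nn


class StartEndMentionScorer(nn.Module):
    def __init__(self, hidden_size: int, ffnn_size: int = 1024):
        super().__init__()
        # Separate projections for the "span start" and "span end" roles.
        self.start_mlp = nn.Sequential(nn.Linear(hidden_size, ffnn_size), nn.GELU())
        self.end_mlp = nn.Sequential(nn.Linear(hidden_size, ffnn_size), nn.GELU())
        # Per-token scores plus a bilinear start-end interaction term.
        self.start_score = nn.Linear(ffnn_size, 1)
        self.end_score = nn.Linear(ffnn_size, 1)
        self.bilinear = nn.Linear(ffnn_size, ffnn_size, bias=False)

    def forward(self, token_reprs: torch.Tensor) -> torch.Tensor:
        # token_reprs: (seq_len, hidden_size) contextualized encoder outputs.
        starts = self.start_mlp(token_reprs)            # (seq_len, ffnn_size)
        ends = self.end_mlp(token_reprs)                # (seq_len, ffnn_size)
        # Score every (start, end) pair without building span representations:
        # score(i, j) = s_start(i) + s_end(j) + starts_i^T W ends_j
        pair_scores = (
            self.start_score(starts)                    # (seq_len, 1), broadcast over ends
            + self.end_score(ends).transpose(0, 1)      # (1, seq_len), broadcast over starts
            + starts @ self.bilinear(ends).transpose(0, 1)
        )
        return pair_scores                              # (seq_len, seq_len)


if __name__ == "__main__":
    # Toy usage: score all candidate spans from encoder hidden states.
    hidden = torch.randn(50, 768)                       # e.g. transformer encoder outputs
    scorer = StartEndMentionScorer(hidden_size=768)
    scores = scorer(hidden)                             # scores[i, j] ~ span (i, j) being a mention
    print(scores.shape)                                 # torch.Size([50, 50])
```

Because the score for a span decomposes into start, end, and start-end interaction terms, all pairwise scores can be computed with matrix products over token representations; this is what keeps the memory footprint small relative to models that concatenate span and span-pair vectors.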

Task                    Dataset      Model                     Metric   Value   Global Rank
Coreference Resolution  CoNLL 2012   s2e + Longformer-Large    Avg F1   80.3    #6
Coreference Resolution  CoNLL 2012   c2f + SpanBERT-Large      Avg F1   80.2    #7
