Coreference Resolution in Research Papers from Multiple Domains

4 Jan 2021 · Arthur Brack, Daniel Uwe Müller, Anett Hoppe, Ralph Ewerth

Coreference resolution is essential for automatic text understanding to facilitate high-level information retrieval tasks such as text summarisation or question answering. Previous work indicates that the performance of state-of-the-art approaches (e.g. based on BERT) noticeably declines when applied to scientific papers. In this paper, we investigate the task of coreference resolution in research papers and subsequent knowledge graph population. We present the following contributions: (1) We annotate a corpus for coreference resolution that comprises 10 different scientific disciplines from Science, Technology, and Medicine (STM); (2) We propose transfer learning for automatic coreference resolution in research papers; (3) We analyse the impact of coreference resolution on knowledge graph (KG) population; (4) We release a research KG that is automatically populated from 55,485 papers in 10 STM domains. Comprehensive experiments show the usefulness of the proposed approach. Our transfer learning approach considerably outperforms state-of-the-art baselines on our corpus with an F1 score of 61.4 (+11.0), while the evaluation against a gold standard KG shows that coreference resolution improves the quality of the populated KG significantly with an F1 score of 63.5 (+21.8).
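The paper couples coreference resolution with knowledge graph population: mentions that refer to the same concept are merged into one entity before triples are added to the KG, which is what drives the reported quality gain. The snippet below is a minimal, hypothetical sketch of that idea, not the authors' pipeline; the function name, cluster format, and example data are all assumptions made for illustration.

```python
# Sketch (hypothetical, not the paper's implementation): merge coreferent
# mentions into canonical entities before populating a knowledge graph.
from collections import defaultdict

def populate_kg(triples, coref_clusters):
    """triples        : list of (subject_mention, relation, object_mention)
       coref_clusters : list of mention clusters, e.g. [["BERT", "the model", "it"]]"""
    # Map every mention to a canonical name (here: the first mention of its cluster).
    canonical = {}
    for cluster in coref_clusters:
        head = cluster[0]
        for mention in cluster:
            canonical[mention] = head

    # Rewrite triples onto canonical entities and deduplicate them.
    kg = defaultdict(set)
    for subj, rel, obj in triples:
        kg[canonical.get(subj, subj)].add((rel, canonical.get(obj, obj)))
    return kg

triples = [
    ("BERT", "used-for", "coreference resolution"),
    ("the model", "evaluated-on", "STM-coref"),
]
clusters = [["BERT", "the model", "it"]]
print(populate_kg(triples, clusters))
# Both triples now attach to the single entity "BERT" instead of two separate nodes.
```

Without the coreference step, "BERT" and "the model" would become distinct KG nodes; merging them is the mechanism by which coreference resolution improves the populated graph.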

Results from the Paper


Ranked #1 on Coreference Resolution on STM-coref (using extra training data)
Task                   | Dataset   | Model                               | Metric Name | Metric Value | Global Rank | Uses Extra Training Data
Coreference Resolution | STM-coref | BFCR + SpanBERT + Transfer Learning | CoNLL F1    | 61.4         | #1          | Yes
Coreference Resolution | STM-coref | BFCR + SpanBERT                     | CoNLL F1    | 50.4         | #2          |
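For reference, the CoNLL F1 reported above is conventionally the CoNLL-2012 shared-task score, i.e. the unweighted mean of the F1 scores of the MUC, B³, and CEAF-φ4 metrics:

```latex
\mathrm{CoNLL}\ F_1 = \frac{1}{3}\left( F_1^{\mathrm{MUC}} + F_1^{B^3} + F_1^{\mathrm{CEAF}_{\phi_4}} \right)
```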

Methods


No methods listed for this paper.