Fine-grained Information Status Classification Using Discourse Context-Aware Self-Attention

Previous work on bridging anaphora recognition (Hou et al., 2013a) casts the problem as a subtask of learning fine-grained information status (IS). However, these systems depend heavily on many hand-crafted linguistic features. In this paper, we propose a discourse context-aware self-attention neural network model for fine-grained IS classification. On the ISNotes corpus (Markert et al., 2012), our model, using contextually encoded word representations from BERT (Devlin et al., 2018), achieves a new state of the art on fine-grained IS classification, with a 4.1% absolute improvement in overall accuracy over Hou et al. (2013a). More importantly, we also show a 3.9% F1 improvement for bridging anaphora recognition without using any complex hand-crafted semantic features designed to capture the bridging phenomenon.
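The abstract names the architecture but gives no implementation details, so the following is only a minimal sketch of the general idea: a mention's BERT token embeddings are concatenated with those of its surrounding discourse, passed through multi-head self-attention so each token becomes context-aware, and the attended mention span is pooled and classified into fine-grained IS classes. All module names, dimensions, the mean-pooling choice, and the 8-way label set are illustrative assumptions, not the authors' actual model.

```python
# Hedged sketch of a discourse context-aware self-attention IS classifier.
# Names, dimensions, and the 8-class output are assumptions for illustration.
import torch
import torch.nn as nn

class ContextAwareISClassifier(nn.Module):
    def __init__(self, hidden_dim=768, num_heads=8, num_classes=8):
        super().__init__()
        # Self-attention over [mention tokens; discourse-context tokens]
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, mention_vecs, context_vecs):
        # mention_vecs: (batch, m_len, hidden) -- e.g., BERT embeddings of the mention
        # context_vecs: (batch, c_len, hidden) -- BERT embeddings of the discourse context
        seq = torch.cat([mention_vecs, context_vecs], dim=1)
        attended, _ = self.attn(seq, seq, seq)  # context-aware token representations
        # Mean-pool the attended mention span, then predict the IS class
        mention_repr = attended[:, : mention_vecs.size(1)].mean(dim=1)
        return self.classifier(mention_repr)

# Usage with random stand-ins for BERT embeddings:
model = ContextAwareISClassifier()
logits = model(torch.randn(2, 5, 768), torch.randn(2, 50, 768))
print(logits.shape)  # torch.Size([2, 8])
```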
