Unleashing the Power of Neural Discourse Parsers -- A Context and Structure Aware Approach Using Large Scale Pretraining

6 Nov 2020 · Grigorii Guz, Patrick Huber, Giuseppe Carenini

RST-based discourse parsing is an important NLP task with numerous downstream applications, such as summarization, machine translation and opinion mining. In this paper, we demonstrate a simple, yet highly accurate discourse parser, incorporating recent contextual language models. Our parser establishes the new state-of-the-art (SOTA) performance for predicting structure and nuclearity on two key RST datasets, RST-DT and Instr-DT. We further demonstrate that pretraining our parser on the recently available large-scale "silver-standard" discourse treebank MEGA-DT provides even larger performance benefits, suggesting a novel and promising research direction in the field of discourse analysis.
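As a rough illustration of the general recipe the abstract describes (a discourse parser incorporating a pretrained contextual language model), the sketch below scores whether two adjacent discourse units should be merged and jointly predicts nuclearity. This is not the authors' exact architecture: the encoder checkpoint, class names, head sizes, and label set are illustrative assumptions.

```python
# Hedged sketch: scoring adjacent discourse-unit pairs with a pretrained
# contextual encoder. The checkpoint ("bert-base-uncased"), the head
# design, and the nuclearity label set are assumptions for illustration,
# not the configuration used in the paper.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

NUCLEARITY_LABELS = ["NN", "NS", "SN"]  # nucleus-nucleus, nucleus-satellite, satellite-nucleus

class SpanPairScorer(nn.Module):
    def __init__(self, encoder_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.merge_head = nn.Linear(hidden, 1)          # should these two spans merge?
        self.nuclearity_head = nn.Linear(hidden, len(NUCLEARITY_LABELS))

    def forward(self, input_ids, attention_mask):
        # Encode "[CLS] left span [SEP] right span [SEP]" jointly, so the
        # [CLS] vector attends over both spans and their boundary context.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]
        return self.merge_head(cls).squeeze(-1), self.nuclearity_head(cls)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SpanPairScorer()

left = "Place the bracket against the wall."
right = "Then drive the screws into the studs."
batch = tokenizer(left, right, return_tensors="pt", truncation=True)
with torch.no_grad():
    merge_score, nuclearity_logits = model(batch["input_ids"], batch["attention_mask"])
print(float(merge_score), NUCLEARITY_LABELS[int(nuclearity_logits.argmax(-1))])
```

In a full parser, scores like these would drive tree construction (e.g., greedily or via a transition system) over all adjacent span pairs; pretraining on a large silver-standard treebank such as MEGA-DT would supply the training signal before fine-tuning on RST-DT or Instr-DT.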


Results from the Paper


Ranked #8 on Discourse Parsing on RST-DT (Standard Parseval (Span) metric)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Discourse Parsing | Instructional-DT (Instr-DT) | Guz et al. (2020) (pretrained) | Standard Parseval (Span) | 65.41 | #10 |
| Discourse Parsing | Instructional-DT (Instr-DT) | Guz et al. (2020) (pretrained) | Standard Parseval (Nuclearity) | 46.59 | #9 |
| Discourse Parsing | Instructional-DT (Instr-DT) | Guz et al. (2020) | Standard Parseval (Span) | 64.55 | #12 |
| Discourse Parsing | Instructional-DT (Instr-DT) | Guz et al. (2020) | Standard Parseval (Nuclearity) | 44.41 | #12 |
| Discourse Parsing | RST-DT | Guz et al. (2020) (pretrained) | Standard Parseval (Span) | 72.94 | #8 |
| Discourse Parsing | RST-DT | Guz et al. (2020) (pretrained) | Standard Parseval (Nuclearity) | 61.86 | #11 |
| Discourse Parsing | RST-DT | Guz et al. (2020) | Standard Parseval (Span) | 72.43 | #10 |
| Discourse Parsing | RST-DT | Guz et al. (2020) | Standard Parseval (Nuclearity) | 61.38 | #13 |
