Multi-view and multi-task training of RST discourse parsers

We experiment with different ways of training LSTM networks to predict RST discourse trees. The main challenge for RST discourse parsing is the limited amount of training data. We combat this by regularizing our models using task supervision from related tasks as well as alternative views on discourse structures. We show that a simple LSTM sequential discourse parser takes advantage of this multi-view and multi-task framework, with 12-15% error reductions over our baseline (depending on the metric) and results that rival more complex state-of-the-art parsers.

COLING 2016
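
The abstract describes regularizing a sequential LSTM discourse parser with supervision from related tasks. The sketch below is a minimal PyTorch illustration of that general multi-task pattern (a shared BiLSTM encoder with one output head per task); it is not the paper's exact architecture, and the task names ("rst", "aux"), label sizes, and hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

class MultiTaskSequenceTagger(nn.Module):
    """Shared BiLSTM encoder with one classification head per task.

    Hypothetical illustration of multi-task regularization for a sequential
    (tagging-style) discourse parser; task names and label sizes are invented.
    """

    def __init__(self, vocab_size, emb_dim, hidden_dim, task_label_sizes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # One linear output layer per task; all tasks share embed + encoder.
        self.heads = nn.ModuleDict({
            task: nn.Linear(2 * hidden_dim, n_labels)
            for task, n_labels in task_label_sizes.items()
        })

    def forward(self, token_ids, task):
        states, _ = self.encoder(self.embed(token_ids))
        return self.heads[task](states)  # (batch, seq_len, n_labels)


# Hypothetical label inventories: "rst" is the main discourse-tagging task,
# "aux" stands in for an auxiliary task providing extra supervision.
model = MultiTaskSequenceTagger(vocab_size=10_000, emb_dim=50, hidden_dim=100,
                                task_label_sizes={"rst": 40, "aux": 20})
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters())

def train_step(token_ids, gold_labels, task):
    """One update on a batch from `task`; the shared encoder receives
    gradients from whichever task the batch belongs to."""
    optimizer.zero_grad()
    logits = model(token_ids, task)
    loss = loss_fn(logits.reshape(-1, logits.size(-1)), gold_labels.reshape(-1))
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data, alternating between the two tasks.
main_tokens = torch.randint(0, 10_000, (8, 30))
main_tags = torch.randint(0, 40, (8, 30))
aux_tokens = torch.randint(0, 10_000, (8, 30))
aux_tags = torch.randint(0, 20, (8, 30))
print(train_step(main_tokens, main_tags, task="rst"))
print(train_step(aux_tokens, aux_tags, task="aux"))
```

Alternating batches between the main and auxiliary tasks means gradients from the auxiliary objective also update, and thereby regularize, the shared encoder, which is the general idea behind multi-task training when labeled data for the main task is scarce.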

Datasets

RST-DT

Results from the Paper


Task: Discourse Parsing
Dataset: RST-DT
Model: LSTM Sequential Discourse Parser (Braud et al., 2016)

Metric                       Value   Global Rank
RST-Parseval (Span)          79.7*   #10
RST-Parseval (Nuclearity)    63.6*   #10
RST-Parseval (Relation)      47.7*   #10
RST-Parseval (Full)          47.5*   #5

Methods

LSTM