Neural RST-based Evaluation of Discourse Coherence

This paper evaluates the utility of Rhetorical Structure Theory (RST) trees and relations for discourse coherence evaluation. We show that incorporating silver-standard RST features can improve accuracy when classifying coherence. We demonstrate this with a tree-recursive neural model, RST-Recursive, which takes advantage of the text's RST features produced by a state-of-the-art RST parser. We evaluate our approach on the Grammarly Corpus of Discourse Coherence (GCDC) and show that, when ensembled with the current state of the art, it achieves new state-of-the-art accuracy on this benchmark. Furthermore, when deployed alone, RST-Recursive achieves competitive accuracy while having 62% fewer parameters.
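To make the tree-recursive idea concrete, below is a minimal, hypothetical sketch of how a model like RST-Recursive could compose a document vector bottom-up over an RST tree: leaves carry EDU (elementary discourse unit) embeddings, and each internal node combines its children together with an embedding of the RST relation joining them. All parameter names, dimensions, and the specific composition function here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # hidden size (illustrative)

# Hypothetical parameters: map [left; right; relation] -> parent vector
W = rng.standard_normal((DIM, 3 * DIM)) * 0.1
b = np.zeros(DIM)

# Toy embeddings for a few RST relation labels (assumed inventory)
relation_emb = {rel: rng.standard_normal(DIM) * 0.1
                for rel in ("Elaboration", "Contrast", "Attribution")}

def encode(node):
    """Recursively encode an RST tree node into a fixed-size vector.

    A leaf is ("leaf", edu_vector); an internal node is
    ("node", relation_label, left_subtree, right_subtree).
    """
    if node[0] == "leaf":
        return node[1]
    _, rel, left, right = node
    h_left, h_right = encode(left), encode(right)
    x = np.concatenate([h_left, h_right, relation_emb[rel]])
    return np.tanh(W @ x + b)  # simple recursive composition cell

# Toy tree: two EDUs joined by an Elaboration relation
edu1, edu2 = rng.standard_normal(DIM), rng.standard_normal(DIM)
tree = ("node", "Elaboration", ("leaf", edu1), ("leaf", edu2))
root = encode(tree)  # the root vector would feed a coherence classifier
print(root.shape)
```

In the actual model the composition cell would be learned (e.g. a gated or LSTM-style cell) and the root representation passed to a classification head over coherence labels; this sketch only shows the recursive traversal over silver-standard RST trees.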

PDF · Abstract · Asian Chapter 2020

Datasets


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Coherence Evaluation | GCDC + RST | RST-Recursive | Accuracy | 53.04 | #4 |
| Coherence Evaluation | GCDC + RST | RST-Ensemble | Accuracy | 55.39 | #2 |
| Coherence Evaluation | GCDC + RST | RST-Ensemble | Average F1 | 46.98 | #1 |
| Coherence Evaluation | GCDC + RST | RST-Recursive | Average F1 | 44.30 | #3 |

Methods


No methods listed for this paper.