DMRST: A Joint Framework for Document-Level Multilingual RST Discourse Segmentation and Parsing

CODI 2021 · Zhengyuan Liu, Ke Shi, Nancy F. Chen

Discourse parsing plays an important role in understanding the information flow and argumentative structure of natural language text, making it beneficial for downstream tasks. While previous work has significantly improved the performance of RST discourse parsing, existing models are not readily applicable in practical use cases: (1) EDU segmentation is not integrated into most existing tree parsing frameworks, so it is not straightforward to apply such models to new, unsegmented data. (2) Most parsers are developed for English only and cannot be used in multilingual scenarios. (3) Parsers trained on single-domain treebanks do not generalize well to out-of-domain inputs. In this work, we propose a document-level multilingual RST discourse parsing framework that performs EDU segmentation and discourse tree parsing jointly. Moreover, we introduce a cross-translation augmentation strategy that enables the framework to support multilingual parsing and improves its domain generality. Experimental results show that our model achieves state-of-the-art performance on document-level multilingual RST parsing across all sub-tasks.
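
The cross-translation strategy is only outlined in the abstract. As an illustration of the underlying idea, here is a minimal Python sketch, assuming a hypothetical `translate(text, src, tgt)` MT hook and a simplified `RSTSample` container (neither is from the paper's released code): because the gold tree is defined over EDU indices rather than surface strings, translating EDU by EDU lets the same annotation be reused in every target language.

```python
# Minimal sketch of the cross-translation idea described in the abstract.
# Assumptions (not from the paper's code): a `translate(text, src, tgt)` MT
# hook and this simplified sample container. Because the gold tree is defined
# over EDU indices, translating EDU by EDU leaves the annotation valid.
from dataclasses import dataclass, replace
from typing import Callable, List

@dataclass(frozen=True)
class RSTSample:
    lang: str
    edus: List[str]   # EDU strings, in document order
    tree: object      # gold RST tree over EDU indices (spans, nuclearity, relations)

def cross_translate(sample: RSTSample,
                    target_langs: List[str],
                    translate: Callable[[str, str, str], str]) -> List[RSTSample]:
    """Return one synthetic training sample per target language."""
    augmented = []
    for tgt in target_langs:
        translated_edus = [translate(edu, sample.lang, tgt) for edu in sample.edus]
        # The tree is shared: spans, nuclearity and relation labels carry over.
        augmented.append(replace(sample, lang=tgt, edus=translated_edus))
    return augmented
```

In training, the augmented copies would simply be mixed with the original samples, so a single annotated treebank can supply supervision in several languages.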


Datasets

RST-DT

Results from the Paper


Ranked #2 on End-to-End RST Parsing on RST-DT (using extra training data)

Task: End-to-End RST Parsing    Dataset: RST-DT

Model: DMRST (2021) + Cross-translation  (uses extra training data)
    Metric Name                     Metric Value   Global Rank
    Standard Parseval (Full)        50.1           #2
    Standard Parseval (Span)        70.4           #2
    Standard Parseval (Nuclearity)  60.6           #2
    Standard Parseval (Relation)    51.6           #2

Model: DMRST (2021)
    Metric Name                     Metric Value   Global Rank
    Standard Parseval (Full)        48.6           #3
    Standard Parseval (Span)        69.8           #3
    Standard Parseval (Nuclearity)  59.4           #3
    Standard Parseval (Relation)    49.4           #3
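
For reference, the four metrics above follow the Standard Parseval convention: micro-averaged F1 over tree constituents (as popularized by Morey et al., 2017), where Span, Nuclearity, Relation, and Full differ only in how much of the constituent label must match. Below is a minimal illustrative sketch, assuming constituents are encoded as (start, end, nuclearity, relation) tuples; it is not the paper's evaluation code, and in the end-to-end setting the boundaries come from predicted EDU segmentation, so real evaluations align spans at the token level and aggregate counts over the whole test set.

```python
# Illustrative sketch (not the paper's evaluation code) of the Standard
# Parseval variants: trees are flattened to sets of labeled constituents and
# F1 is computed over the matches; the four metrics differ only in which
# part of the label must agree. Applied per tree pair here for simplicity.
from typing import Set, Tuple

# (start_edu, end_edu, nuclearity, relation) for one tree constituent.
Constituent = Tuple[int, int, str, str]

def parseval_f1(pred: Set[Constituent], gold: Set[Constituent],
                level: str = "span") -> float:
    def key(c: Constituent):
        start, end, nuc, rel = c
        return {"span":       (start, end),
                "nuclearity": (start, end, nuc),
                "relation":   (start, end, rel),
                "full":       (start, end, nuc, rel)}[level]
    p = {key(c) for c in pred}
    g = {key(c) for c in gold}
    tp = len(p & g)                      # constituents matched at this level
    if tp == 0:
        return 0.0
    precision, recall = tp / len(p), tp / len(g)
    return 2 * precision * recall / (precision + recall)
```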

Methods


No methods listed for this paper.