Text Simplification by Tagging

Edit-based approaches have recently shown promising results on multiple monolingual sequence transduction tasks. In contrast to conventional sequence-to-sequence (Seq2Seq) models, which are trained on parallel corpora to generate text from scratch, these methods learn fast and accurate transformations while leveraging powerful pre-trained language models, which makes them much more effective. Inspired by these ideas, we present TST, a simple and efficient Text Simplification system based on sequence Tagging that leverages pre-trained Transformer-based encoders. Our system applies simple data augmentations and training/inference tweaks to a pre-existing system, which makes it less reliant on large amounts of parallel training data, provides more control over the outputs, and enables faster inference. Our best model achieves near state-of-the-art performance on benchmark test datasets for the task. Since it is fully non-autoregressive, it achieves inference speeds over 11 times faster than the current state-of-the-art text simplification system.
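To make the tagging idea concrete, below is a minimal sketch of non-autoregressive edit tagging on top of a pre-trained Transformer encoder, using the Hugging Face token-classification head. This is not the paper's TST system: the encoder checkpoint, the toy tag vocabulary (KEEP / DELETE / REPLACE_*), and the decoding loop are all illustrative assumptions, and the classification head here is randomly initialized rather than trained.

```python
# Sketch: non-autoregressive sequence tagging for text simplification.
# All tags are predicted in a single parallel forward pass (no token-by-token
# decoding), which is where the inference speedup comes from.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

EDIT_TAGS = ["KEEP", "DELETE", "REPLACE_simple"]  # toy tag set (assumption)

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# NOTE: the token-classification head is untrained here; real systems
# fine-tune it on (complex sentence, edit-tag sequence) supervision.
model = AutoModelForTokenClassification.from_pretrained(
    "roberta-base", num_labels=len(EDIT_TAGS)
)

def simplify(sentence: str) -> str:
    words = sentence.split()
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits  # one tag distribution per subword
    tag_ids = logits.argmax(-1)[0].tolist()

    # Map subword-level predictions back to words and apply the edits.
    out, seen = [], set()
    for pos, wid in enumerate(enc.word_ids(0)):
        if wid is None or wid in seen:
            continue  # skip special tokens and subword continuations
        seen.add(wid)
        tag = EDIT_TAGS[tag_ids[pos]]
        if tag == "KEEP":
            out.append(words[wid])
        elif tag.startswith("REPLACE_"):
            out.append(tag.split("_", 1)[1])
        # DELETE: drop the word entirely
    return " ".join(out)

# With an untrained head the output is arbitrary; shown for shape only.
print(simplify("The committee subsequently ratified the proposal ."))
```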

Published at EACL (BEA) 2021.

Results from the Paper


Task                | Dataset          | Model | Metric Name         | Metric Value | Global Rank
Text Simplification | ASSET            | TST   | SARI (EASSE>=0.2.1) | 43.21        | #3
Text Simplification | PWKP / WikiSmall | TST   | SARI                | 44.67        | #1
Text Simplification | PWKP / WikiSmall | TST   | SARI (EASSE>=0.2.1) | 44.67        | #1
Text Simplification | TurkCorpus       | TST   | SARI (EASSE>=0.2.1) | 41.46        | #3
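The SARI scores above are pinned to the EASSE implementation (>=0.2.1), since SARI values differ across implementations. A minimal sketch of scoring a system output with EASSE's corpus-level SARI is shown below; the sentences are toy examples, not the ASSET/TurkCorpus data.

```python
# Sketch: computing SARI with the EASSE evaluation library.
from easse.sari import corpus_sari

orig_sents = ["The cat perched atop the mat ."]   # source (complex) sentences
sys_sents = ["The cat sat on the mat ."]          # system simplifications
refs_sents = [                                    # one inner list per reference set
    ["The cat sat on the mat ."],
    ["A cat sat on the mat ."],
]

print(corpus_sari(orig_sents=orig_sents, sys_sents=sys_sents, refs_sents=refs_sents))
```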

Methods


No methods listed for this paper.