MUSS: Multilingual Unsupervised Sentence Simplification by Mining Paraphrases

Progress in sentence simplification has been hindered by a lack of labeled parallel simplification data, particularly in languages other than English. We introduce MUSS, a Multilingual Unsupervised Sentence Simplification system that does not require labeled simplification data. MUSS uses a novel approach to sentence simplification that trains strong models using sentence-level paraphrase data instead of proper simplification data. These models leverage unsupervised pretraining and controllable generation mechanisms to flexibly adjust attributes such as length and lexical complexity at inference time. We further present a method to mine such paraphrase data in any language from Common Crawl using semantic sentence embeddings, thus removing the need for labeled data. We evaluate our approach on English, French, and Spanish simplification benchmarks and closely match or outperform the previous best supervised results, despite not using any labeled simplification data. We push the state of the art further by incorporating labeled simplification data.
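The mining step described in the abstract can be pictured with a short sketch: embed sentences, search for nearest neighbors, and keep pairs that are close in meaning but not near-duplicates. This is a minimal illustration under stated assumptions, not the paper's pipeline: the embedding model, the faiss index type, and the similarity window are all illustrative choices.

```python
# Minimal sketch of paraphrase mining with semantic sentence embeddings.
# Assumptions (not from the paper): sentence-transformers for embeddings,
# a flat faiss inner-product index, and an illustrative similarity window.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

def mine_paraphrases(sentences, k=8, low=0.75, high=0.95):
    """Return (query, neighbor) pairs that are similar but not near-duplicates."""
    model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
    emb = model.encode(sentences, normalize_embeddings=True)  # unit vectors
    emb = np.asarray(emb, dtype="float32")
    index = faiss.IndexFlatIP(emb.shape[1])  # inner product == cosine on unit vectors
    index.add(emb)
    sims, ids = index.search(emb, k + 1)  # +1 because each query matches itself
    pairs = []
    for i, (row_sims, row_ids) in enumerate(zip(sims, ids)):
        for s, j in zip(row_sims, row_ids):
            if j == i:
                continue  # skip the trivial self-match
            if low <= s <= high:  # similar meaning, but not a near-duplicate
                pairs.append((sentences[i], sentences[int(j)]))
    return pairs
```

At Common Crawl scale, the flat index would be replaced by an approximate one and the thresholds tuned, but the structure of the search is the same.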


Results from the Paper


Task                 Dataset     Model                            Metric               Value  Global Rank
Text Simplification  ASSET       MUSS (BART+ACCESS Supervised)    SARI (EASSE>=0.2.1)  44.15  #2
Text Simplification  ASSET       MUSS (BART+ACCESS Supervised)    BLEU                 72.98  #2
Text Simplification  ASSET       MUSS (BART+ACCESS Supervised)    FKGL                  6.05  #2
Text Simplification  ASSET       MUSS (BART+ACCESS Unsupervised)  SARI (EASSE>=0.2.1)  42.65  #5
Text Simplification  ASSET       MUSS (BART+ACCESS Unsupervised)  FKGL                  8.23  #4
Text Simplification  TurkCorpus  MUSS (BART+ACCESS Unsupervised)  SARI (EASSE>=0.2.1)  40.85  #6
Text Simplification  TurkCorpus  MUSS (BART+ACCESS Unsupervised)  FKGL                  8.79  #3
Text Simplification  TurkCorpus  MUSS (BART+ACCESS Supervised)    SARI (EASSE>=0.2.1)  42.53  #2
Text Simplification  TurkCorpus  MUSS (BART+ACCESS Supervised)    BLEU                 78.17  #8
Text Simplification  TurkCorpus  MUSS (BART+ACCESS Supervised)    FKGL                  7.60  #1

Methods


BART (unsupervised sequence-to-sequence pretraining) and ACCESS (controllable generation via control tokens), as named in the MUSS models above.
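The ACCESS mechanism conditions generation by prepending discrete control tokens to the source sentence, so changing the token values at inference time adjusts attributes such as length and lexical complexity. A hedged sketch: the four token names follow the ACCESS paper, the ratio values are illustrative, and the downstream simplify call is a hypothetical stand-in for the trained model.

```python
# Sketch of ACCESS-style controllable generation: ratio-valued control
# tokens are prepended to the source and steer the model at inference time.
# Token names follow the ACCESS paper; the values and the simplify() call
# below are illustrative assumptions, not the paper's exact interface.
def add_control_tokens(source, nb_chars=0.8, lev_sim=0.75,
                       word_rank=0.75, dep_tree_depth=0.8):
    """Prefix the source with control tokens for length (NbChars),
    paraphrasing (LevSim), lexical complexity (WordRank), and
    syntactic complexity (DepTreeDepth)."""
    prefix = (f"<NbChars_{nb_chars}> <LevSim_{lev_sim}> "
              f"<WordRank_{word_rank}> <DepTreeDepth_{dep_tree_depth}>")
    return f"{prefix} {source}"

src = add_control_tokens("The incarceration rate has risen inexorably .")
# src is then fed to the trained seq2seq model, e.g. simplify(src);
# lowering nb_chars or word_rank requests shorter or lexically simpler output.
```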