Sentence Simplification with Deep Reinforcement Learning

EMNLP 2017  ·  Xingxing Zhang, Mirella Lapata

Sentence simplification aims to make sentences easier to read and understand. Most recent approaches draw on insights from machine translation to learn simplification rewrites from monolingual corpora of complex and simple sentences. We address the simplification problem with an encoder-decoder model coupled with a deep reinforcement learning framework. Our model, which we call DRESS (shorthand for Deep REinforcement Sentence Simplification), explores the space of possible simplifications while learning to optimize a reward function that encourages outputs which are simple, fluent, and preserve the meaning of the input. Experiments on three datasets demonstrate that our model outperforms competitive simplification systems.
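The abstract describes a reward that jointly encourages simplicity, meaning preservation, and fluency. A minimal sketch of such reward shaping is given below; the weights and the proxy measures (length reduction for simplicity, token overlap for relevance) are illustrative assumptions only — the paper itself scores simplicity with SARI, relevance with cosine similarity of sentence encodings, and fluency with a language model.

```python
def jaccard(a, b):
    """Token-overlap similarity between two token lists."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def combined_reward(source, output, r_fluency, w_s=1.0, w_r=0.25, w_f=0.5):
    """Weighted sum of three reward components, each assumed in [0, 1].

    source, output: token lists; r_fluency: externally supplied fluency
    score (in the paper, a language-model probability).
    """
    # Simplicity proxy: relative length reduction (the paper uses SARI).
    r_s = max(0.0, 1.0 - len(output) / max(len(source), 1))
    # Relevance proxy: token overlap (the paper uses cosine similarity
    # of sentence encodings).
    r_r = jaccard(source, output)
    return w_s * r_s + w_r * r_r + w_f * r_fluency
```

In the reinforcement-learning setup, a reward of this shape would be computed for each sampled simplification and used to scale the policy gradient of the encoder-decoder.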


Datasets


Introduced in the Paper:

WikiLarge

Used in the Paper:

Newsela, ASSET, TurkCorpus

Results from the Paper


Task                  Dataset            Model     Metric Name           Metric Value  Global Rank
Text Simplification   ASSET              DRESS-LS  SARI (EASSE>=0.2.1)   36.59         # 8
Text Simplification   ASSET              DRESS-LS  BLEU                  86.39*        # 3
Text Simplification   Newsela            DRESS-LS  SARI                  26.63         # 10
Text Simplification   Newsela            DRESS-LS  BLEU                  24.30         # 2
Text Simplification   Newsela            DRESS     SARI                  27.37         # 8
Text Simplification   Newsela            DRESS     BLEU                  23.21         # 3
Text Simplification   PWKP / WikiSmall   DRESS-LS  SARI                  27.24         # 6
Text Simplification   PWKP / WikiSmall   DRESS-LS  BLEU                  36.32         # 3
Text Simplification   PWKP / WikiSmall   DRESS     SARI                  27.48         # 5
Text Simplification   PWKP / WikiSmall   DRESS     BLEU                  34.53         # 4
Text Simplification   TurkCorpus         DRESS-LS  SARI (EASSE>=0.2.1)   37.27         # 13
Text Simplification   TurkCorpus         DRESS-LS  BLEU                  80.12         # 6
Text Simplification   TurkCorpus         DRESS     SARI (EASSE>=0.2.1)   37.08         # 17
Text Simplification   TurkCorpus         DRESS     BLEU                  77.18         # 9

Methods


No methods listed for this paper.