Sentence compression produces a shorter sentence by removing redundant information, preserving both the grammaticality and the important content of the original sentence.
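For illustration, deletion-based compression can be framed as a binary keep/drop decision per input token. The sketch below shows that framing on an invented sentence and mask; it is a toy illustration of the task, not any particular system.

```python
# Minimal sketch of deletion-based sentence compression: a compression is a
# binary keep/drop decision per token. The sentence and mask are invented,
# for illustration only.

def compress(tokens, keep_mask):
    """Return the subsequence of tokens whose mask entry is True."""
    return [tok for tok, keep in zip(tokens, keep_mask) if keep]

sentence = "The company , founded in 1998 , announced record profits on Tuesday".split()
# One plausible compression keeps only the core clause.
mask = [True, True, False, False, False, False, False, True, True, True, False, False]
print(" ".join(compress(sentence, mask)))
# -> "The company announced record profits"
```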
Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves accuracy comparable to or better than recurrent models.
Ranked #15 on Dependency Parsing on Penn Treebank
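As a rough sketch of how a transition-based model of this kind operates (a generic illustration, not this paper's actual transition system, features, or trained weights), a scoring function repeatedly chooses the next transition from the current state. For deletion-based compression, the transitions can be as simple as KEEP and DROP:

```python
# Rough sketch of a transition-based decoder for deletion-based compression.
# The state walks left to right over the buffer of input tokens; at each step
# a scoring function (here a stub standing in for a trained feed-forward
# network) chooses between a KEEP and a DROP transition.

def score_transitions(state):
    """Stub for a feed-forward network: returns scores for (KEEP, DROP).
    A real model would featurize the state (context tokens, decision
    history, ...) and run it through trained layers."""
    token = state["buffer"][state["pos"]]
    # Toy heuristic: prefer dropping short function-like words.
    return (1.0, 2.0) if token.lower() in {"a", "an", "the", "very"} else (2.0, 1.0)

def decode(tokens):
    state = {"buffer": tokens, "pos": 0, "output": []}
    while state["pos"] < len(state["buffer"]):
        keep_score, drop_score = score_transitions(state)
        if keep_score >= drop_score:      # apply the KEEP transition
            state["output"].append(state["buffer"][state["pos"]])
        state["pos"] += 1                 # both transitions advance the buffer
    return state["output"]

print(" ".join(decode("The cat sat on a very old mat".split())))
# -> "cat sat on old mat"
```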
The proposed model does not require parallel text-summary pairs, achieving promising results in unsupervised sentence compression on benchmark datasets.
In sentence compression, the task of shortening sentences while retaining the original meaning, models tend to be trained on large corpora containing pairs of verbose and compressed sentences.
Current research in text simplification has been hampered by two central problems: (i) the small amount of high-quality parallel simplification data available, and (ii) the lack of explicit annotations of simplification operations, such as deletions or substitutions, on existing data.
Ranked #7 on Text Simplification on PWKP / WikiSmall (SARI metric)
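Where parallel data does exist, explicit operation annotations of the kind described above can be approximated by aligning each complex sentence with its simplification and reading off the edit operations. A minimal sketch with Python's standard-library difflib, on an invented sentence pair:

```python
# Sketch: derive explicit simplification operations (delete / substitute /
# insert) by aligning a complex sentence with its simplified counterpart.
from difflib import SequenceMatcher

complex_sent = "The physician administered the medication to the patient".split()
simple_sent = "The doctor gave the medicine to the patient".split()

matcher = SequenceMatcher(a=complex_sent, b=simple_sent)
for op, i1, i2, j1, j2 in matcher.get_opcodes():
    if op != "equal":
        print(op, complex_sent[i1:i2], "->", simple_sent[j1:j2])
# replace ['physician', 'administered'] -> ['doctor', 'gave']
# replace ['medication'] -> ['medicine']
```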
Sentence compression is the task of compressing a long sentence into a short one by deleting redundant words.
Ranked #1 on Sentence Compression on Google Dataset
We present a fully unsupervised, extractive text summarization system that builds on a submodularity framework introduced in prior work.
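To make the submodularity angle concrete, the sketch below greedily selects sentences under a sentence budget using a simple distinct-word-coverage objective; the objective is a stand-in for illustration, not this system's actual function. Greedy maximization of a monotone submodular objective carries the classic (1 - 1/e) approximation guarantee.

```python
# Sketch of greedy sentence selection under a monotone submodular objective.
# The objective here is plain word coverage: how many distinct words the
# selected summary covers.

def coverage(selected, sentences):
    """Submodular objective: number of distinct words covered."""
    covered = set()
    for i in selected:
        covered |= set(sentences[i].lower().split())
    return len(covered)

def greedy_summarize(sentences, budget):
    selected = []
    while len(selected) < budget:
        best, best_gain = None, 0
        for i in range(len(sentences)):
            if i in selected:
                continue
            gain = coverage(selected + [i], sentences) - coverage(selected, sentences)
            if gain > best_gain:
                best, best_gain = i, gain
        if best is None:      # no remaining sentence adds new coverage
            break
        selected.append(best)
    return [sentences[i] for i in sorted(selected)]

docs = [
    "the committee approved the budget",
    "the budget was approved",
    "members debated the proposal at length",
]
print(greedy_summarize(docs, budget=2))
# -> ['the committee approved the budget', 'members debated the proposal at length']
```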
In this paper, we propose an explicit sentence compression method to enhance the source sentence representation for neural machine translation (NMT).
We introduce a novel graph-based framework for abstractive meeting speech summarization that is fully unsupervised and does not rely on any annotations.
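One common building block for unsupervised, graph-based summarization is the multi-sentence compression word graph, where utterances are merged into a directed graph and a cheap start-to-end path is read off as a fused sentence. The sketch below (invented utterances, naive merging of identical surface words) illustrates that general idea; it is not necessarily this paper's exact pipeline.

```python
# Sketch of a word graph for multi-sentence compression: sentences sharing
# words are merged into one directed graph, frequent transitions get cheap
# edges, and a cheap start-to-end path is read off as a fused sentence.
import networkx as nx

sentences = [
    "the committee will meet on friday morning",
    "the committee plans to meet on friday",
]

G = nx.DiGraph()
for sent in sentences:
    tokens = ["<s>"] + sent.split() + ["</s>"]
    for u, v in zip(tokens, tokens[1:]):
        # Count how often each word transition occurs across sentences.
        if G.has_edge(u, v):
            G[u][v]["count"] += 1
        else:
            G.add_edge(u, v, count=1)

# Frequent transitions become cheap edges, so the shortest path prefers
# wording shared by several sentences.
for _, _, data in G.edges(data=True):
    data["weight"] = 1.0 / data["count"]

path = nx.shortest_path(G, "<s>", "</s>", weight="weight")
print(" ".join(path[1:-1]))
# -> "the committee will meet on friday"
```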