Sentence compression produces a shorter sentence by removing redundant information, preserving the grammaticality and the important content of the original sentence.
Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models.
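To make the idea of a task-specific transition system concrete, here is a minimal sketch of the arc-standard transition system commonly used in transition-based dependency parsing. The transition names and the toy sentence are illustrative assumptions; the neural scoring model that chooses transitions is omitted, and this is not the paper's actual implementation.

```python
# Toy arc-standard transition system for dependency parsing.
# A parser state is (stack, buffer, arcs); each transition maps one
# state to the next. A neural model would score which transition to
# take; here the action sequence is given by hand.

SHIFT, LEFT_ARC, RIGHT_ARC = "SHIFT", "LEFT_ARC", "RIGHT_ARC"

def apply_transition(stack, buffer, arcs, action):
    """Return the successor state after applying one transition."""
    if action == SHIFT:
        # move the next buffer word onto the stack
        return stack + [buffer[0]], buffer[1:], arcs
    if action == LEFT_ARC:
        # attach stack[-2] as a dependent of stack[-1]
        head, dep = stack[-1], stack[-2]
        return stack[:-2] + [head], buffer, arcs + [(head, dep)]
    if action == RIGHT_ARC:
        # attach stack[-1] as a dependent of stack[-2]
        head, dep = stack[-2], stack[-1]
        return stack[:-1], buffer, arcs + [(head, dep)]
    raise ValueError(f"unknown action: {action}")

def parse(words, actions):
    """Run a full action sequence and return the predicted arcs."""
    stack, buffer, arcs = [], list(words), []
    for action in actions:
        stack, buffer, arcs = apply_transition(stack, buffer, arcs, action)
    return arcs
```

For example, `parse(["ROOT", "She", "ate"], [SHIFT, SHIFT, SHIFT, LEFT_ARC, RIGHT_ARC])` yields the arcs `[("ate", "She"), ("ROOT", "ate")]`, i.e. "She" depends on "ate", which depends on the root.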
Sentence simplification aims to make sentences easier to read and understand.
The proposed model does not require parallel text-summary pairs and achieves promising results in unsupervised sentence compression on benchmark datasets.
Discourse segmentation, which segments texts into Elementary Discourse Units, is a fundamental step in discourse analysis.
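As a rough illustration of what segmenting text into Elementary Discourse Units looks like, the sketch below splits a sentence at punctuation and before a few discourse connectives. The connective list is an arbitrary assumption for the example; real discourse segmenters are learned models, not heuristics like this.

```python
import re

# Illustrative connective list (an assumption, not from the paper).
CONNECTIVES = r"(?=\b(?:because|although|but|while|when)\b)"

def segment(text):
    """Split a sentence into candidate EDUs at commas/semicolons
    and before a handful of discourse connectives (toy heuristic)."""
    edus = []
    for clause in re.split(r"[,;]\s*", text):
        # zero-width lookahead split keeps the connective with its clause
        for piece in re.split(CONNECTIVES, clause):
            if piece.strip():
                edus.append(piece.strip())
    return edus
```

On `"I stayed home because it rained, and I read a book"` this produces three candidate units: `"I stayed home"`, `"because it rained"`, and `"and I read a book"`.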
Current research in text simplification has been hampered by two central problems: (i) the small amount of high-quality parallel simplification data available, and (ii) the lack of explicit annotations of simplification operations, such as deletions or substitutions, on existing data.
We present a fully unsupervised, extractive text summarization system that builds on a submodular optimization framework introduced in prior work.
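Submodular summarization typically selects sentences greedily, since the diminishing-returns property gives the greedy algorithm an approximation guarantee. The sketch below uses distinct-word coverage as the submodular objective; that objective and the budget parameter `k` are illustrative assumptions, not the cited system's actual formulation.

```python
# Greedy maximization of a submodular coverage objective for
# extractive summarization (toy objective: distinct words covered).

def coverage(selected, sentences):
    """Submodular objective: number of distinct words covered
    by the selected sentence indices."""
    words = set()
    for i in selected:
        words |= set(sentences[i].lower().split())
    return len(words)

def greedy_summarize(sentences, k):
    """Pick k sentences, each time taking the one with the largest
    marginal gain in coverage (classic greedy submodular selection)."""
    selected = []
    for _ in range(k):
        best, best_gain = None, -1
        for i in range(len(sentences)):
            if i in selected:
                continue
            gain = coverage(selected + [i], sentences) - coverage(selected, sentences)
            if gain > best_gain:
                best, best_gain = i, gain
        selected.append(best)
    return [sentences[i] for i in sorted(selected)]
```

Because the objective is submodular, a sentence's marginal gain can only shrink as the summary grows, which is what makes the simple greedy loop effective.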
We introduce a novel graph-based framework for abstractive meeting speech summarization that is fully unsupervised and does not rely on any annotations.
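To illustrate graph-based, annotation-free ranking over meeting utterances, here is a TextRank-style sketch: utterances are nodes, word-overlap similarity weights the edges, and scores are computed by power iteration. This is not the paper's specific framework, only a generic example of the unsupervised graph-based idea; the similarity measure and damping factor are assumptions.

```python
# TextRank-style unsupervised ranking of utterances on a
# similarity graph (toy word-overlap similarity).

def similarity(a, b):
    """Word-overlap similarity between two utterances (toy measure)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / (len(wa) + len(wb))

def rank(utterances, damping=0.85, iters=50):
    """Score utterances by power iteration over the similarity graph."""
    n = len(utterances)
    w = [[similarity(utterances[i], utterances[j]) if i != j else 0.0
          for j in range(n)] for i in range(n)]
    scores = [1.0 / n] * n
    for _ in range(iters):
        new = []
        for i in range(n):
            s = 0.0
            for j in range(n):
                out = sum(w[j])
                if w[j][i] and out:
                    # j passes score to i proportional to edge weight
                    s += w[j][i] / out * scores[j]
            new.append((1 - damping) / n + damping * s)
        scores = new
    return scores
```

The highest-scoring utterances would then feed a summarization step; in this toy example, off-topic utterances with no lexical overlap end up with the lowest scores.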
In this paper, we propose an explicit sentence compression method to enhance the source sentence representation for NMT.