20 papers with code • 1 benchmark • 2 datasets
Sentence compression produces a shorter sentence by removing redundant information, preserving the grammaticality and the important content of the original sentence.
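For a concrete sense of the task, many systems frame compression as deletion: each token of the source sentence is either kept or dropped, and the compression is the subsequence of kept tokens. The sketch below is purely illustrative (the sentence pair and function name are made up); it derives per-token keep/drop labels under the assumption that the compression is a subsequence of the source.

```python
# Deletion-based framing of sentence compression: each source token is
# either kept (1) or dropped (0), and the compression is the subsequence
# of kept tokens. The example pair below is hypothetical.

def derive_keep_labels(source: list[str], compressed: list[str]) -> list[int]:
    """Greedily align a compression (assumed to be a subsequence of the
    source) back to the source, yielding one keep/drop label per token."""
    labels, j = [], 0
    for token in source:
        if j < len(compressed) and token == compressed[j]:
            labels.append(1)
            j += 1
        else:
            labels.append(0)
    if j != len(compressed):
        raise ValueError("compression is not a subsequence of the source")
    return labels

source = "the senate , meeting late tuesday , finally approved the budget".split()
compressed = "the senate approved the budget".split()
print(derive_keep_labels(source, compressed))
# -> [1, 1, 0, 0, 0, 0, 0, 0, 1, 1, 1]
```

Labels derived this way are what supervised deletion-based compressors are typically trained to predict; note that the greedy alignment can misattach repeated words, which real pipelines handle more carefully.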
Unsupervised Abstractive Meeting Summarization with Multi-Sentence Compression and Budgeted Submodular Maximization
We introduce a novel graph-based framework for abstractive meeting speech summarization that is fully unsupervised and does not rely on any annotations.
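As rough context for the multi-sentence compression component (not this paper's exact method), such systems are often built on word graphs in the spirit of Filippova (2010): related sentences are merged into a word adjacency graph, and a short path that follows frequently shared transitions is read off as the compression. The node-merging scheme, edge weighting, and sentences below are simplified assumptions.

```python
# Toy word-graph multi-sentence compression: merge sentences into a word
# adjacency graph, then take the cheapest <s>-to-</s> path, where frequently
# shared transitions have cheap edges. Deliberately simplified.
import heapq
from collections import defaultdict

def indexed(tokens):
    """Tag each token with its occurrence count so repeated words
    (e.g. two 'the's) map to distinct graph nodes."""
    seen, out = defaultdict(int), []
    for t in tokens:
        seen[t] += 1
        out.append((t, seen[t]))
    return out

def word_graph_compress(sentences: list[str]) -> str:
    counts = defaultdict(int)
    for s in sentences:
        nodes = indexed(["<s>"] + s.lower().split() + ["</s>"])
        for a, b in zip(nodes, nodes[1:]):
            counts[(a, b)] += 1
    graph = defaultdict(list)
    for (a, b), c in counts.items():
        graph[a].append((b, 1.0 / c))  # shared transitions become cheap edges
    # Dijkstra from <s> to </s>: the cheapest path prefers common wording.
    start, end = ("<s>", 1), ("</s>", 1)
    heap, best = [(0.0, [start])], {start: 0.0}
    while heap:
        dist, path = heapq.heappop(heap)
        if path[-1] == end:
            return " ".join(w for w, _ in path[1:-1])
        for nxt, w in graph[path[-1]]:
            nd = dist + w
            if nd < best.get(nxt, float("inf")):
                best[nxt] = nd
                heapq.heappush(heap, (nd, path + [nxt]))
    return ""

print(word_graph_compress([
    "the committee approved the new budget on tuesday",
    "the committee finally approved the budget",
]))  # -> "the committee approved the budget"
```

Real systems additionally merge nodes by word form and part of speech, filter candidate paths by length and verb presence, and rerank them; none of that is shown here.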
Our model is a simple feed-forward neural network that operates on a task-specific transition system, yet achieves comparable or better accuracies than recurrent models.
We present a fully unsupervised, extractive text summarization system that leverages a submodular optimization framework introduced in prior work.
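As background for the submodular approaches above, the workhorse is the budgeted greedy heuristic for maximizing a monotone submodular objective under a knapsack constraint: repeatedly add the sentence with the best cost-scaled marginal gain until the budget is exhausted. The toy coverage objective, sentence sets, and costs below are assumptions for illustration, not any of these papers' exact formulations.

```python
# Budgeted greedy maximization of a monotone submodular objective under a
# knapsack (budget) constraint. The coverage objective and data are toys.

def coverage(selected: set[int], sentences: list[set[str]]) -> int:
    """f(S) = number of distinct words covered -- monotone and submodular."""
    covered = set()
    for i in selected:
        covered |= sentences[i]
    return len(covered)

def budgeted_greedy(sentences: list[set[str]], costs: list[int], budget: int) -> set[int]:
    selected: set[int] = set()
    spent = 0
    while True:
        best, best_ratio = None, 0.0
        for i in range(len(sentences)):
            if i in selected or spent + costs[i] > budget:
                continue
            gain = coverage(selected | {i}, sentences) - coverage(selected, sentences)
            ratio = gain / costs[i]  # cost-scaled marginal gain
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            return selected
        selected.add(best)
        spent += costs[best]

sentences = [{"budget", "senate", "approved"},
             {"senate", "met", "tuesday"},
             {"budget", "cuts", "education"}]
costs = [3, 3, 3]  # e.g., sentence lengths
print(budgeted_greedy(sentences, costs, budget=6))  # -> {0, 1}
```

The variant with a formal approximation guarantee also compares the greedy solution against the best single element that fits the budget; the sketch omits that step for brevity.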
Current research in text simplification has been hampered by two central problems: (i) the small amount of high-quality parallel simplification data available, and (ii) the lack of explicit annotations of simplification operations, such as deletions or substitutions, on existing data.
In sentence compression, the task of shortening sentences while retaining the original meaning, models tend to be trained on large corpora containing pairs of verbose and compressed sentences.
In this paper we advocate the use of bilingual corpora, which are abundantly available, for training sentence compression models.