Generating a summary of a given sentence.
Multi-sentence summarization is a well-studied problem in NLP, while generating a description for a single image is a well-studied problem in Computer Vision.
Work on summarization has explored both reinforcement learning (RL) optimization using ROUGE as a reward and syntax-aware models, such as models whose input is enriched with part-of-speech (POS) tags and dependency information.
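As a rough illustration of the RL direction, the sketch below shows a self-critical REINFORCE update with ROUGE as the reward. The model interface (`sample`, `greedy`) and the `rouge_fn` scorer are assumptions for exposition, not any specific system's API.

```python
def reinforce_rouge_loss(model, source, reference, rouge_fn):
    """Self-critical REINFORCE loss with ROUGE as the reward.

    Assumed (hypothetical) interfaces: model.sample()/model.greedy()
    return (decoded_tokens, per_token_log_probs), and
    rouge_fn(hypothesis, reference) returns a scalar ROUGE score.
    """
    # Sample a summary stochastically and record its log-probabilities.
    sampled, log_probs = model.sample(source)
    # Greedy decode serves as the baseline (the "self-critical" trick,
    # which reduces the variance of the policy gradient).
    baseline, _ = model.greedy(source)

    # Reward: ROUGE of the sample relative to the greedy baseline.
    reward = rouge_fn(sampled, reference) - rouge_fn(baseline, reference)

    # REINFORCE: scale the sampled sequence's log-likelihood by the reward.
    return -reward * log_probs.sum()
```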
In this paper, we propose a novel approach to unsupervised sentence summarization by mapping the Information Bottleneck principle to a conditional language modelling objective: given a sentence, our approach seeks a compressed sentence that can best predict the next sentence.
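A minimal sketch of that objective, assuming a hypothetical `lm_logprob` interface to a pretrained language model: candidate compressions are scored by how well they predict the next sentence, with a simple length penalty standing in for the compression term (the actual objective and search procedure may differ).

```python
def score_compression(compression, next_sentence, lm_logprob, beta=1.0):
    """Information-Bottleneck-style score for a candidate compression.

    lm_logprob(text, context) is an assumed interface returning the
    log-probability a pretrained language model assigns to `text`
    given `context`. Relevance (predicting the next sentence) is
    traded off against brevity.
    """
    relevance = lm_logprob(next_sentence, context=compression)
    brevity_penalty = beta * len(compression.split())
    return relevance - brevity_penalty

def best_compression(candidates, next_sentence, lm_logprob):
    # Pick the candidate (e.g. obtained by deleting words from the
    # source sentence) that best predicts the following sentence
    # while staying short.
    return max(candidates,
               key=lambda c: score_compression(c, next_sentence, lm_logprob))
```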
Back-translation based approaches have recently led to significant progress in unsupervised sequence-to-sequence tasks such as machine translation and style transfer.
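For intuition, one round of a back-translation-style training loop might look like the following sketch; `expand_model`, `compress_model`, and their methods are hypothetical placeholders rather than a specific library API.

```python
def back_translation_step(expand_model, compress_model, unlabeled_summaries):
    """One schematic round of back-translation for unsupervised
    summarization: a reverse (expansion) model synthesizes pseudo
    sources, on which the forward (compression) model is trained.
    """
    for summary in unlabeled_summaries:
        # 1. Use the reverse model to synthesize a pseudo source sentence.
        pseudo_source = expand_model.generate(summary)
        # 2. Train the forward model to reconstruct the original summary
        #    from the synthetic source, as if it were a supervised pair.
        compress_model.train_step(source=pseudo_source, target=summary)
```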
Sequence-to-sequence (seq2seq) models have become a popular framework for neural sequence prediction.
Recent neural sequence-to-sequence models have shown significant progress on short text summarization.
Ranked #27 on Abstractive Text Summarization on CNN / Daily Mail
For the second constraint, we restore the key information by copying words from the knowledge encoder with the help of a soft gating mechanism.
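A minimal sketch of such a soft gate in the pointer-generator style (tensor names and shapes here are illustrative, not the paper's exact formulation):

```python
import torch

def soft_gate_copy(vocab_logits, copy_attn, src_token_ids, gate):
    """Mix generation and copy distributions with a soft gate.

    vocab_logits:  (batch, vocab_size) decoder output scores
    copy_attn:     (batch, src_len) attention over encoder tokens
    src_token_ids: (batch, src_len) vocabulary ids of source tokens
    gate:          (batch, 1) probability of generating vs. copying
    """
    p_gen = torch.softmax(vocab_logits, dim=-1)   # generation distribution
    p_copy = torch.zeros_like(p_gen)
    # Scatter attention mass onto the vocabulary ids of the source tokens,
    # so attending to a source word raises its output probability.
    p_copy.scatter_add_(1, src_token_ids, copy_attn)
    # The gate interpolates between generating from the vocabulary
    # and copying from the source.
    return gate * p_gen + (1.0 - gate) * p_copy
```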
In this paper, we investigate the sentence summarization task that produces a summary from a source sentence.
Ranked #7 on Text Summarization on DUC 2004 Task 1
Most previous seq2seq summarization systems depend purely on the source text to generate summaries, which tends to be unstable.
Ranked #16 on Text Summarization on GigaWord
Abstractive summarization is the ultimate goal of document summarization research, but it was previously less investigated due to the immaturity of text generation techniques.
Ranked #11 on Text Summarization on CNN / Daily Mail (Anonymized)