We propose a new approach that generates multiple variants of the target summary with diverse content and varying lengths, then scores and selects admissible ones according to the user's needs.
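The generate-then-rank idea can be sketched as follows. This is a toy illustration, not the paper's actual model: `generate_variants`, the keyword-coverage `score`, and the admissibility threshold are all hypothetical stand-ins for a neural generator and a learned scorer.

```python
# Hypothetical sketch: produce candidate summaries of several target
# lengths, score each, and keep the admissible ones.
import random

random.seed(0)

def generate_variants(words, lengths):
    # Toy "summarizer": sample one candidate per target length.
    return [random.sample(words, min(n, len(words))) for n in lengths]

def score(candidate, keywords):
    # Toy admissibility score: keyword coverage with a length penalty.
    coverage = len(set(candidate) & keywords) / max(len(keywords), 1)
    return coverage - 0.01 * len(candidate)

source = "the storm forced officials to close coastal roads on friday".split()
keywords = {"storm", "roads", "close"}

variants = generate_variants(source, lengths=[3, 5, 7])
admissible = [v for v in variants if score(v, keywords) > 0.2]
best = max(variants, key=lambda v: score(v, keywords))
```

In practice the scorer would reflect user constraints (length budget, required content) rather than raw keyword overlap.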
Our experiments show that CATE benefits the downstream search, especially in large search spaces.
Instead, we investigate several less-studied aspects of neural abstractive summarization, including (i) the importance of selecting important segments from transcripts to serve as input to the summarizer; (ii) striking a balance between the amount and quality of training instances; (iii) the appropriate summary length and start/end points.
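Aspect (i), selecting important transcript segments to feed the summarizer, can be illustrated with a minimal sketch. The keyword-overlap scoring and the `select_segments` helper are assumptions for illustration only; the paper's actual selector is a learned component.

```python
# Hedged sketch of segment selection: rank transcript segments by
# overlap with a set of document keywords and keep the top-k as
# input to the summarizer.
def select_segments(segments, keywords, k=2):
    # sorted() is stable, so ties keep transcript order.
    scored = sorted(segments, key=lambda s: -len(set(s.split()) & keywords))
    return scored[:k]

transcript = [
    "so um welcome everyone to the meeting",
    "the budget for the new project was approved",
    "let's take a short break",
    "the project deadline moves to next quarter",
]
keys = {"budget", "project", "deadline"}

inputs = select_segments(transcript, keys)
```

Filtering filler segments this way also interacts with aspect (ii): stricter selection yields fewer but cleaner training instances.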
If generating a word would introduce an erroneous relation into the summary, that behavior should be discouraged.
In this paper, we present a neural summarization model that, by learning from single human abstracts, can produce a broad spectrum of summaries ranging from purely extractive to highly generative ones.
There is thus a crucial gap between sentence selection and fusion: a summarizer should support both compressing single sentences and fusing sentence pairs.
In this paper, we present structure-infused copy mechanisms that facilitate copying important words and relations from the source sentence to the summary.
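The core of a copy mechanism can be sketched as a soft switch between generating from the vocabulary and copying from the source, in the style of pointer-generator networks. All numbers and names below are illustrative, not the paper's structure-infused formulation, which additionally conditions the attention on source-side relations.

```python
# Minimal copy-mechanism sketch: the final word distribution mixes a
# vocabulary distribution with attention over source tokens, so salient
# source words (even out-of-vocabulary ones) can be copied verbatim.
def copy_distribution(p_gen, vocab_probs, attention, source_tokens):
    # P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention on w
    final = {w: p_gen * p for w, p in vocab_probs.items()}
    for tok, a in zip(source_tokens, attention):
        final[tok] = final.get(tok, 0.0) + (1.0 - p_gen) * a
    return final

vocab_probs = {"storm": 0.2, "hits": 0.5, "coast": 0.3}  # toy vocabulary dist.
attention = [0.7, 0.2, 0.1]                              # attention over source
source = ["hurricane", "hits", "coast"]

dist = copy_distribution(0.6, vocab_probs, attention, source)
# "hurricane" is out-of-vocabulary yet gets copy mass: (1 - 0.6) * 0.7 = 0.28
```

Structure-infused variants would bias `attention` toward tokens that head important dependency relations, rather than treating all source positions uniformly.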