Abstractive Text Summarization is the task of generating a short, concise summary that captures the salient ideas of the source text. The generated summaries may contain new phrases and sentences that do not appear in the source text.
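For a concrete feel for the task, the sketch below generates an abstractive summary with a pre-trained sequence-to-sequence model via the Hugging Face transformers pipeline; the facebook/bart-large-cnn checkpoint and the example article are illustrative choices, not a prescribed setup.

```python
# Minimal sketch: abstractive summarization with a pre-trained seq2seq model.
# The checkpoint below is one common public choice, used here for illustration.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and was the tallest man-made structure in the world for 41 years."
)

# Because the model generates freely, the summary may use phrases
# that never occur in the source text.
result = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(result[0]["summary_text"])
```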
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.
Ranked #1 on Machine Translation on IWSLT2015 English-German
This paper presents a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective, future n-gram prediction, together with a proposed n-stream self-attention mechanism.
Ranked #3 on Abstractive Text Summarization on CNN / Daily Mail (using extra training data)
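As a toy illustration of the future n-gram prediction objective, the sketch below builds the targets only: each position is paired with the next n tokens rather than just the next one. ProphetNet's n-stream self-attention, which makes predicting these targets efficient, is not reproduced here.

```python
# Toy target construction for a future n-gram prediction objective:
# at step i the model is asked to predict tokens i+1 .. i+n, not only i+1.
def future_ngram_targets(tokens, n=2, pad="<pad>"):
    targets = []
    for i in range(len(tokens)):
        future = tokens[i + 1 : i + 1 + n]
        future += [pad] * (n - len(future))  # pad near the end of the sequence
        targets.append(future)
    return targets

sentence = ["the", "cat", "sat", "on", "the", "mat"]
for tok, tgt in zip(sentence, future_ngram_targets(sentence, n=2)):
    print(f"{tok:>4} -> {tgt}")
```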
Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when the models are fine-tuned on downstream NLP tasks, including text summarization.
Ranked #1 on Text Summarization on X-Sum
We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.
Ranked #2 on Text Summarization on X-Sum
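A simplified sketch of the two noising functions described above, operating on whole sentences and whitespace tokens rather than the subword units used in practice; span lengths for infilling are drawn from a Poisson distribution, as in the paper.

```python
# Simplified BART-style noising: (1) shuffle sentence order, (2) text infilling,
# replacing a span of tokens with a single mask token.
import random
import numpy as np

def shuffle_sentences(sentences):
    """Noising 1: randomly permute the original sentence order."""
    order = sentences[:]
    random.shuffle(order)
    return order

def text_infilling(tokens, mask="<mask>", mask_ratio=0.3, lam=3.0):
    """Noising 2: replace spans of tokens with a single mask token.

    Span lengths are drawn from Poisson(lam); a zero-length span simply
    inserts a mask token. Roughly `mask_ratio` of the tokens are corrupted.
    """
    tokens = tokens[:]
    budget = int(len(tokens) * mask_ratio)
    masked = 0
    while masked < budget and tokens:
        length = min(int(np.random.poisson(lam)), budget - masked)
        start = random.randrange(0, max(1, len(tokens) - length + 1))
        tokens[start:start + length] = [mask]   # whole span -> one mask token
        masked += max(length, 1)
    return tokens

print(shuffle_sentences(["A first sentence.", "A second one.", "A third."]))
print(text_infilling("the quick brown fox jumps over the lazy dog".split()))
```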
We show results for extractive and human baselines to demonstrate a large abstractive gap in performance.
We predict separate convolution kernels based solely on the current time-step in order to determine the importance of context elements.
Ranked #1 on Machine Translation on WMT 2017 English-Chinese
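A minimal sketch of the idea (not the authors' fairseq implementation): a linear layer predicts a softmax-normalised kernel from the representation at the current time step alone, and that kernel weights the neighbouring context elements.

```python
# Toy dynamic convolution: kernel weights are predicted per time step,
# then applied as a weighted sum over a causal window of past positions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDynamicConv(nn.Module):
    def __init__(self, d_model, kernel_size=3):
        super().__init__()
        self.kernel_size = kernel_size
        # The kernel is a function of the current position only.
        self.kernel_proj = nn.Linear(d_model, kernel_size)

    def forward(self, x):
        # x: (batch, time, d_model)
        B, T, C = x.shape
        k = self.kernel_size
        # One normalised kernel per (batch, time) position: how important
        # each context element is for this step.
        kernels = F.softmax(self.kernel_proj(x), dim=-1)          # (B, T, k)
        # Causal window over the k most recent positions (left-padded with zeros).
        padded = torch.cat([x.new_zeros(B, k - 1, C), x], dim=1)
        windows = padded.unfold(1, k, 1)                          # (B, T, C, k)
        return torch.einsum("btck,btk->btc", windows, kernels)

out = ToyDynamicConv(d_model=8)(torch.randn(2, 5, 8))             # (2, 5, 8)
```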
There has been much recent work on training neural attention models at the sequence level, either with reinforcement learning-style methods or by optimizing the beam.
Ranked #4 on Machine Translation on IWSLT2015 German-English
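One common objective in this family is expected risk (minimum risk training); the sketch below is a schematic of that idea, not the specific losses compared in the paper: sample or beam-search a set of candidates, score each with a task cost such as 1 - ROUGE, and minimise the cost expected under the model's renormalised probabilities over those candidates.

```python
# Schematic sequence-level loss: expected task cost over a set of candidates.
import torch

def expected_risk_loss(log_probs, costs):
    """log_probs: (num_candidates,) model log-probabilities of the candidates.
    costs: (num_candidates,) task cost per candidate, e.g. 1 - ROUGE."""
    weights = torch.softmax(log_probs, dim=0)   # renormalise over candidates only
    return (weights * costs).sum()

# Toy usage: three candidate outputs with their model scores and costs.
log_probs = torch.tensor([-2.1, -1.3, -3.0], requires_grad=True)
loss = expected_risk_loss(log_probs, torch.tensor([0.4, 0.1, 0.7]))
loss.backward()
```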
Although widely adopted, existing approaches for fine-tuning pre-trained language models have been shown to be unstable across hyper-parameter settings, motivating recent work on trust region methods.
Ranked #1 on Abstractive Text Summarization on CNN / Daily Mail
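A minimal sketch of a trust-region-style penalty in this spirit: in addition to the task loss, the fine-tuned model's predictive distribution on a slightly perturbed input is kept close, in symmetric KL, to its distribution on the clean input. The `model(inputs_embeds=...)` call signature is an assumption for illustration, not a specific library API.

```python
# Sketch of a trust-region-style fine-tuning regulariser (assumed model API).
import torch
import torch.nn.functional as F

def symmetric_kl(p_logits, q_logits):
    p_log = F.log_softmax(p_logits, dim=-1)
    q_log = F.log_softmax(q_logits, dim=-1)
    return (F.kl_div(q_log, p_log, log_target=True, reduction="batchmean")
            + F.kl_div(p_log, q_log, log_target=True, reduction="batchmean"))

def regularised_loss(model, embeddings, labels, noise_std=1e-5, lam=1.0):
    # `model(inputs_embeds=...)` returning logits is an assumed interface.
    clean_logits = model(inputs_embeds=embeddings)
    noise = noise_std * torch.randn_like(embeddings)
    noisy_logits = model(inputs_embeds=embeddings + noise)
    task_loss = F.cross_entropy(clean_logits, labels)
    # Penalise drift between clean and perturbed predictions.
    return task_loss + lam * symmetric_kl(clean_logits, noisy_logits)
```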
Language model (LM) pre-training has resulted in impressive performance and sample efficiency on a variety of language understanding tasks.