Shortening a set of data computationally to create a summary that represents the most important or relevant information within the original content (Source: Wikipedia).
The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration.
Ranked #1 on Machine Translation on IWSLT2015 English-German
To remedy this, we propose BigBird, a sparse attention mechanism that reduces the quadratic dependency of full attention on sequence length to linear.
Ranked #1 on Text Classification on arXiv
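A minimal sketch (not the authors' implementation) of the kind of sparse attention pattern BigBird describes: each query attends to a sliding window of neighbours, a few random positions, and a handful of global tokens, so the number of attended positions per non-global query stays roughly constant instead of growing with sequence length. All parameter names and sizes below are illustrative.

```python
import numpy as np

def bigbird_style_mask(seq_len, window=3, n_random=2, n_global=2, seed=0):
    """Boolean attention mask with O(seq_len) non-zeros per row:
    sliding-window + random + global connections, in the spirit of
    sparse-attention schemes like BigBird (not the official block layout)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        # sliding window: attend to nearby tokens
        lo, hi = max(0, i - window), min(seq_len, i + window + 1)
        mask[i, lo:hi] = True
        # random connections: a constant number of extra positions per query
        mask[i, rng.choice(seq_len, size=n_random, replace=False)] = True
    # global tokens: attend to everything and are attended by everything
    mask[:n_global, :] = True
    mask[:, :n_global] = True
    return mask

mask = bigbird_style_mask(seq_len=16)
print(mask.sum(axis=1))  # non-global rows keep a roughly constant count
```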
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build.
Ranked #1 on Extractive Text Summarization on DUC 2004 Task 1
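For contrast with the abstractive systems listed here, a toy extractive baseline: score each sentence by the document-level frequency of its words and keep the top-k sentences verbatim. The function name and the scoring heuristic are illustrative only, not from any cited paper.

```python
from collections import Counter

def extractive_summary(text, k=2):
    """Toy extractive summarizer: rank sentences by the average document-level
    frequency of their words and return the top-k in original order."""
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freqs = Counter(w.lower() for s in sentences for w in s.split())
    scored = [(sum(freqs[w.lower()] for w in s.split()) / max(len(s.split()), 1), i)
              for i, s in enumerate(sentences)]
    top = sorted(sorted(scored, reverse=True)[:k], key=lambda t: t[1])
    return ". ".join(sentences[i] for _, i in top) + "."

doc = ("The model is trained on news articles. The training data is large. "
       "Cats are nice. The model summarizes articles about training data.")
print(extractive_summary(doc, k=2))
```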
We show BARThez to be very competitive with state-of-the-art BERT-based French language models such as CamemBERT and FlauBERT.
Ranked #1 on Text Summarization on OrangeSum (using extra training data)
This paper presents a new sequence-to-sequence pre-training model called ProphetNet, which introduces a novel self-supervised objective named future n-gram prediction together with an n-stream self-attention mechanism.
Ranked #3 on Abstractive Text Summarization on CNN / Daily Mail (using extra training data)
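A rough sketch of the future n-gram prediction idea: instead of training only on the next token, the loss also covers the following future tokens at each position. The tensor layout and the uniform weighting below are assumptions for illustration; ProphetNet's actual n-stream self-attention is not reproduced here.

```python
import torch
import torch.nn.functional as F

def future_ngram_loss(logits, targets, n=2):
    """logits: (n, batch, seq_len, vocab) -- one prediction stream per future offset.
    targets: (batch, seq_len) gold token ids.
    At position t, stream i is trained to predict token t + i (teacher-forced)."""
    batch, seq_len = targets.shape
    loss = 0.0
    for i in range(n):
        # shift targets left by i; drop positions that run past the sequence end
        shifted = targets[:, i:]                     # (batch, seq_len - i)
        stream_logits = logits[i][:, : seq_len - i]  # align predictions with shifted targets
        loss = loss + F.cross_entropy(
            stream_logits.reshape(-1, stream_logits.size(-1)),
            shifted.reshape(-1),
        )
    return loss / n  # uniform weighting over the n future offsets (an assumption)
```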
Recent work pre-training Transformers with self-supervised objectives on large text corpora has shown great success when the models are fine-tuned on downstream NLP tasks, including text summarization.
Ranked #1 on Text Summarization on X-Sum
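One common way to use such a pre-trained-then-fine-tuned model is the Hugging Face transformers summarization pipeline. The checkpoint name below ("google/pegasus-xsum") is an assumption about a publicly available model; swap in whichever checkpoint you intend to evaluate.

```python
from transformers import pipeline

# Load a model that was pre-trained with a self-supervised objective and
# fine-tuned on a summarization dataset (checkpoint name assumed).
summarizer = pipeline("summarization", model="google/pegasus-xsum")

article = ("The central bank raised interest rates for the third time this year. "
           "Officials said the move was aimed at slowing inflation.")
print(summarizer(article, max_length=64, min_length=10)[0]["summary_text"])
```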
We evaluate a number of noising approaches, finding the best performance by both randomly shuffling the order of the original sentences and using a novel in-filling scheme, where spans of text are replaced with a single mask token.
Ranked #2 on Text Summarization on X-Sum
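A small sketch of the two noising operations described above, applied to plain token lists: shuffle the order of the sentences, and replace spans of tokens with a single mask token (text infilling). Drawing span lengths from a Poisson distribution follows the BART paper; the function names, the [MASK] string, and the masking ratio are illustrative assumptions.

```python
import numpy as np

def shuffle_sentences(sentences, rng):
    """Randomly permute the order of the document's sentences."""
    order = rng.permutation(len(sentences))
    return [sentences[i] for i in order]

def infill_spans(tokens, rng, mask_ratio=0.3, span_lambda=3.0, mask_token="[MASK]"):
    """Text infilling: replace contiguous spans with a single mask token.
    Span lengths are drawn from Poisson(span_lambda)."""
    out, i, n = [], 0, len(tokens)
    budget = int(mask_ratio * n)        # roughly this many tokens get masked out
    while i < n:
        if budget > 0 and rng.random() < mask_ratio:
            span = min(budget, max(1, int(rng.poisson(span_lambda))))
            out.append(mask_token)      # one mask token stands in for the whole span
            i += span
            budget -= span
        else:
            out.append(tokens[i])
            i += 1
    return out

rng = np.random.default_rng(0)
print(infill_spans("the cat sat on the mat and purred".split(), rng))
```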
Unsupervised pre-training of large neural models has recently revolutionized Natural Language Processing.
We show results for extractive and human baselines to demonstrate a large abstractive gap in performance.
We further confirm the flexibility of our model by showing a Levenshtein Transformer trained by machine translation can straightforwardly be used for automatic post-editing.
Ranked #4 on Machine Translation on WMT2016 Romanian-English
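The Levenshtein Transformer generates and refines sequences through insertion and deletion operations. The dynamic-programming sketch below computes the classic Levenshtein edit distance over token sequences, the notion of edits the model's policy is built around; it is the textbook algorithm, not the model itself.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance over token sequences:
    the minimum number of insertions, deletions, and substitutions
    needed to turn sequence a into sequence b."""
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, start=1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, start=1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,        # delete x
                        dp[j - 1] + 1,    # insert y
                        prev + (x != y))  # substitute (or keep if equal)
            prev = cur
    return dp[-1]

print(levenshtein("the cat sat".split(), "a cat sat down".split()))  # -> 2
```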