BERT (Devlin et al., 2018), a pre-trained Transformer (Vaswani et al., 2017) model, has achieved ground-breaking performance on multiple NLP tasks.
#2 best model for Extractive Document Summarization on CNN / Daily Mail
#3 best model for Document Summarization on CNN / Daily Mail (using extra training data)
For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not).
SOTA for Document Summarization on CNN / Daily Mail (using extra training data)
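The schedule above amounts to giving the pretrained encoder and the randomly initialized decoder their own optimizers, with a smaller peak learning rate and longer warmup on the encoder side so the decoder can train quickly without destabilizing the pretrained weights. A minimal sketch, assuming PyTorch; the `model.encoder`/`model.decoder` attribute names and all hyperparameter values are illustrative assumptions, not the paper's exact settings.

```python
import torch

def make_optimizers(model, lr_enc=2e-5, lr_dec=1e-3,
                    warmup_enc=20_000, warmup_dec=10_000):
    # Separate Adam optimizers: small peak LR and long warmup for the
    # pretrained encoder, larger peak LR and shorter warmup for the
    # randomly initialized decoder. (All values here are illustrative.)
    opt_enc = torch.optim.Adam(model.encoder.parameters(), lr=lr_enc)
    opt_dec = torch.optim.Adam(model.decoder.parameters(), lr=lr_dec)

    def noam(warmup):
        # Inverse-sqrt decay with linear warmup (Vaswani et al., 2017);
        # the multiplier peaks at 1.0 when step reaches `warmup`.
        return lambda step: warmup ** 0.5 * min(
            (step + 1) ** -0.5, (step + 1) * warmup ** -1.5)

    sched_enc = torch.optim.lr_scheduler.LambdaLR(opt_enc, noam(warmup_enc))
    sched_dec = torch.optim.lr_scheduler.LambdaLR(opt_dec, noam(warmup_dec))
    return [opt_enc, opt_dec], [sched_enc, sched_dec]

# One training step drives both halves from a single backward pass:
#   loss.backward()
#   for opt, sched in zip(opts, scheds):
#       opt.step(); sched.step(); opt.zero_grad()
```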
In this paper, we present a novel end-to-end neural network framework for extractive document summarization by jointly learning to score and select sentences.
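A minimal sketch of what joint scoring and selection can look like, assuming PyTorch: instead of scoring every sentence once and taking the top-k, the scorer is re-run at each step, conditioned on a recurrent state that tracks the sentences already selected. The module names and sizes are illustrative assumptions, not the paper's architecture.

```python
import torch

class ScoreAndSelect(torch.nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.state_rnn = torch.nn.GRUCell(dim, dim)  # tracks the partial summary
        self.scorer = torch.nn.Linear(2 * dim, 1)

    def forward(self, sent_vecs, n_select=3):
        # sent_vecs: (num_sentences, dim) pre-encoded sentence representations.
        n, dim = sent_vecs.shape
        state = torch.zeros(1, dim)                 # empty-summary state
        mask = torch.zeros(n, dtype=torch.bool)
        picked = []
        for _ in range(min(n_select, n)):
            # Score every remaining sentence conditioned on the current state.
            ctx = state.expand(n, -1)
            scores = self.scorer(torch.cat([sent_vecs, ctx], dim=-1)).squeeze(-1)
            scores = scores.masked_fill(mask, float("-inf"))
            i = int(scores.argmax())                # greedy choice at inference
            picked.append(i)
            mask[i] = True
            # Fold the chosen sentence into the selection state.
            state = self.state_rnn(sent_vecs[i : i + 1], state)
        return picked
```

At training time a model like this would be supervised step by step (e.g. against an oracle selection order); the greedy argmax shown here is the inference-time behavior.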
In this paper, we introduce Iterative Text Summarization (ITS), an iteration-based model for supervised extractive text summarization, inspired by the observation that it is often necessary for a human to read an article multiple times in order to fully understand and summarize its contents.
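The multiple-reading intuition can be sketched as repeated refinement passes over the sentence representations, where each pass conditions every sentence on a global snapshot of the previous pass. This is an illustrative PyTorch sketch, not the ITS architecture; the layer choices and the mean-pooled global state are assumptions.

```python
import torch

class IterativeReader(torch.nn.Module):
    def __init__(self, dim=128, n_iters=3):
        super().__init__()
        self.n_iters = n_iters
        self.refine = torch.nn.GRU(2 * dim, dim, batch_first=True)
        self.score = torch.nn.Linear(dim, 1)

    def forward(self, sent_vecs):
        # sent_vecs: (1, num_sentences, dim) pre-encoded sentence representations.
        h = sent_vecs
        for _ in range(self.n_iters):
            # One "reading pass": every sentence sees a global snapshot of the
            # previous pass before its representation is revised.
            glob = h.mean(dim=1, keepdim=True).expand_as(h)
            h, _ = self.refine(torch.cat([h, glob], dim=-1))
        # Per-sentence extraction probabilities from the final pass.
        return torch.sigmoid(self.score(h)).squeeze(-1)
```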
We propose DeepChannel, a robust, data-efficient, and interpretable neural model for extractive document summarization.
Detecting the novelty of an entire document is a frontier problem in Artificial Intelligence (AI) with widespread NLP applications, such as extractive document summarization, tracking the development of news events, and predicting the impact of scholarly articles.
Most general-purpose extractive summarization models are trained on news articles, which are short and present all important information upfront.