Extractive Text Summarization
27 papers with code • 3 benchmarks • 4 datasets
Given a document, the task is to select a subset of its words or sentences that best represents a summary of the document.
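A minimal sketch of the task: score each sentence by how frequent its words are in the document and keep the top-k, returned in document order. This toy frequency scorer is only illustrative; real systems use learned salience models.

```python
import re
from collections import Counter

def extractive_summary(document: str, k: int = 2) -> list[str]:
    """Pick the k sentences whose words are most frequent in the document."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", document) if s.strip()]
    freq = Counter(re.findall(r"\w+", document.lower()))

    def score(sent: str) -> float:
        # Average corpus frequency of the sentence's tokens.
        toks = re.findall(r"\w+", sent.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    chosen = set(sorted(sentences, key=score, reverse=True)[:k])
    # Preserve the original sentence order in the summary.
    return [s for s in sentences if s in chosen]
```

Selecting in document order matters: an extractive summary should read as a coherent excerpt, not a ranked list.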
Neural sequence-to-sequence models have provided a viable new approach for abstractive text summarization (meaning they are not restricted to simply selecting and rearranging passages from the original text).
This paper reports on Lecture Summarization Service, a Python-based RESTful service that uses the BERT model for text embeddings and K-Means clustering to identify the sentences closest to the centroids for summary selection.
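The selection step described above can be sketched as follows. Assumed for illustration: the sentence embeddings are already computed (in the paper they come from BERT; here any vectors will do), and a small hand-rolled k-means stands in for the clustering library the service actually uses.

```python
import numpy as np

def centroid_summary(embeddings: np.ndarray, k: int,
                     iters: int = 20, seed: int = 0) -> list[int]:
    """Cluster sentence embeddings with k-means, then return, for each
    cluster, the index of the sentence closest to its centroid."""
    rng = np.random.default_rng(seed)
    n = len(embeddings)
    centroids = embeddings[rng.choice(n, size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each sentence to its nearest centroid.
        d = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        for j in range(k):
            members = embeddings[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    d = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
    # Nearest sentence to each centroid, returned in document order.
    return sorted({int(d[:, j].argmin()) for j in range(k)})
```

Choosing one representative per cluster is what gives the summary topical coverage: each centroid stands for a theme in the lecture, and the closest sentence is its most typical expression.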
Summarization based on text extraction is inherently limited, but generation-style abstractive methods have proven challenging to build.
Redundancy-aware extractive summarization systems score the redundancy of the sentences to be included in a summary either jointly with their salience information or separately as an additional sentence scoring step.
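One classic instance of scoring redundancy jointly with salience is Maximal Marginal Relevance (MMR), sketched below. The salience scores and similarity matrix are assumed to be given; how they are computed is a separate modeling choice.

```python
def mmr_select(salience: list[float], sim: list[list[float]],
               k: int, lam: float = 0.7) -> list[int]:
    """Greedily pick k sentences, trading salience against similarity
    to the sentences already selected (Maximal Marginal Relevance).

    salience[i] scores sentence i; sim[i][j] is pairwise similarity;
    lam balances the two terms.
    """
    selected: list[int] = []
    candidates = list(range(len(salience)))
    while candidates and len(selected) < k:
        def mmr(i: int) -> float:
            # Redundancy = closest similarity to anything already chosen.
            redundancy = max((sim[i][j] for j in selected), default=0.0)
            return lam * salience[i] - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With `lam` near 1 the selection reduces to pure salience ranking; lowering it penalizes picking two near-duplicate sentences even when both score highly.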
Finally, we present a search engine for this dataset which is utilized extensively by members of the National Speech and Debate Association today.
Detecting novelty of an entire document is an Artificial Intelligence (AI) frontier problem that has widespread NLP applications, such as extractive document summarization, tracking development of news events, predicting impact of scholarly articles, etc.
Recent years have seen remarkable success in the use of deep neural networks for text summarization.