Summarization
4 papers with code • 14 benchmarks • 8 datasets
Summarization is the task of producing a shorter version of one or more documents that preserves most of the input's meaning.
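As a concrete illustration of the task, here is a minimal extractive baseline (not from any of the papers below — a toy sketch): sentences are scored by the frequency of their words, and the top-scoring sentences are kept in their original order.

```python
import re
from collections import Counter

def extractive_summary(text, num_sentences=2):
    """Toy extractive summarizer: keep the num_sentences highest-scoring sentences."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'\w+', text.lower()))
    # Score each sentence by the total corpus frequency of its words.
    scored = [(sum(freq[w] for w in re.findall(r'\w+', s.lower())), i, s)
              for i, s in enumerate(sentences)]
    top = sorted(scored, reverse=True)[:num_sentences]
    # Restore original sentence order so the summary reads coherently.
    return ' '.join(s for _, _, s in sorted(top, key=lambda t: t[1]))
```

Abstractive methods such as the sequence-to-sequence models listed below instead generate new text rather than selecting existing sentences.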
Most implemented papers
Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond
In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora.
Sparsifying Transformer Models with Trainable Representation Pooling
Quadratic time and memory complexity is reduced to sublinear by means of a robust trainable top-$k$ operator.
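The intuition behind top-$k$ pooling can be sketched as follows. Note that the paper's operator is a trainable, differentiable relaxation; the snippet below is only a hard (non-differentiable) simplification, with hypothetical names, to show why keeping $k$ tokens shrinks the sequence the attention layers must process.

```python
def topk_pool(scores, tokens, k):
    """Hard top-k pooling (simplified sketch): keep only the k highest-scoring
    token representations, preserving their original sequence order."""
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    keep = sorted(order[:k])  # survivors, back in sequence order
    return [tokens[i] for i in keep]
```

Because subsequent self-attention runs over only $k$ tokens instead of the full sequence length $n$, its cost drops from $O(n^2)$ toward $O(k^2)$.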
MuLD: The Multitask Long Document Benchmark
The impressive progress in NLP techniques has been driven by the development of multi-task benchmarks such as GLUE and SuperGLUE.
Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles
The taxonomy assesses task complexity with the Hierarchical Prompting Index (HPI), which captures the cognitive competencies of LLMs across diverse datasets and offers insights into the cognitive demands that datasets place on different LLMs.