Book summarization
8 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention
This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation.
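A minimal sketch of the underlying idea (not the paper's exact formulation): standard softmax attention is computed only within the current segment, while a fixed-size compressive memory, updated with a linear-attention-style rule, summarizes all past segments, so per-segment cost stays constant no matter how long the input grows. The feature map, gate `beta`, and update order below are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def elu_plus_one(x):
    # Simple positive feature map, as used in linear-attention variants.
    return np.where(x > 0, x + 1.0, np.exp(x) + 1.0)

def infini_attention_sketch(segments, d=64, beta=0.5):
    """Process a stream of (Q, K, V) segments with bounded memory.

    A fixed-size compressive memory M (d x d) and normalizer z (d,)
    summarize every past segment, so memory and compute per segment do
    not grow with the total input length.
    """
    M = np.zeros((d, d))   # compressive memory
    z = np.zeros(d)        # normalization accumulator
    outputs = []
    for Q, K, V in segments:
        # Read from long-term memory with a linear-attention retrieval.
        sigma_q = elu_plus_one(Q)
        mem_out = (sigma_q @ M) / (sigma_q @ z[:, None] + 1e-6)
        # Ordinary softmax attention restricted to the current segment.
        local_out = softmax(Q @ K.T / np.sqrt(d)) @ V
        # Blend the two paths (beta would be a learned gate in practice).
        outputs.append(beta * mem_out + (1 - beta) * local_out)
        # Fold the current segment into the compressive memory.
        sigma_k = elu_plus_one(K)
        M += sigma_k.T @ V
        z += sigma_k.sum(axis=0)
    return np.concatenate(outputs, axis=0)

# Example: three 128-token segments, constant memory cost throughout.
rng = np.random.default_rng(0)
segs = [tuple(rng.standard_normal((128, 64)) for _ in range(3)) for _ in range(3)]
out = infini_attention_sketch(segs, d=64)
```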
Enhancing Large Language Model with Self-Controlled Memory Framework
Large Language Models (LLMs) are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information.
Unlimiformer: Long-Range Transformers with Unlimited Length Input
Unlimiformer offloads cross-attention to a single k-nearest-neighbor (kNN) index over the input tokens. This index can be kept in either GPU or CPU memory and queried in sub-linear time; this way, practically unlimited input sequences can be indexed, while every attention head in every decoder layer retrieves only its top-k keys instead of attending to every key.
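A hedged sketch of that retrieval step: attention is computed over only the top-k keys returned for each query. A brute-force dot-product search stands in for the real kNN index (which in practice would be an approximate index such as FAISS, possibly held in CPU memory); the function and array names are illustrative.

```python
import numpy as np

def topk_cross_attention(query, keys, values, k=32):
    """Cross-attention restricted to the top-k keys for one query vector.

    `keys`/`values` hold hidden states for an arbitrarily long input;
    here a brute-force search stands in for the kNN index.
    """
    d = query.shape[-1]
    # Retrieve the k keys with the highest dot-product score.
    scores = keys @ query                      # (num_tokens,)
    top_idx = np.argpartition(-scores, k)[:k]
    top_scores = scores[top_idx] / np.sqrt(d)
    # Attend only over the retrieved subset instead of every key.
    weights = np.exp(top_scores - top_scores.max())
    weights /= weights.sum()
    return weights @ values[top_idx]           # (d,)

# Example: 100k cached input tokens, but each query touches only 32 of them.
rng = np.random.default_rng(0)
keys = rng.standard_normal((100_000, 64)).astype(np.float32)
values = rng.standard_normal((100_000, 64)).astype(np.float32)
query = rng.standard_normal(64).astype(np.float32)
out = topk_cross_attention(query, keys, values, k=32)
```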
Echoes from Alexandria: A Large Resource for Multilingual Book Summarization
In recent years, research in text summarization has mainly focused on the news domain, where texts are typically short and have strong layout features.
LOCOST: State-Space Models for Long Document Abstractive Summarization
State-space models are a low-complexity alternative to transformers for encoding long sequences and capturing long-term dependencies.
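To make the complexity argument concrete, here is a minimal diagonal linear state-space recurrence: a fixed-size hidden state is updated once per token, so time and memory grow linearly with sequence length instead of quadratically as in self-attention. The parameterization below is a toy illustration, not LOCOST's actual model.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Diagonal linear state-space recurrence over a 1-D input sequence.

    h_t = A * h_{t-1} + B * x_t
    y_t = C . h_t

    One pass over the sequence with a fixed-size state h, so cost is
    linear in sequence length.
    """
    h = np.zeros(A.shape[0])
    ys = np.empty(x.shape[0])
    for t in range(x.shape[0]):
        h = A * h + B * x[t]   # elementwise update of the hidden state
        ys[t] = C @ h          # readout
    return ys

# Toy single-channel example with a 16-dimensional state over 4096 steps.
rng = np.random.default_rng(0)
n = 16
A = np.full(n, 0.9)                  # decay rates (stable: |A| < 1)
B = rng.standard_normal(n) * 0.1
C = rng.standard_normal(n) * 0.1
y = ssm_scan(rng.standard_normal(4096), A, B, C)
```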
Attention Score is not All You Need for Token Importance Indicator in KV Cache Reduction: Value Also Matters
Scaling the context size of large language models (LLMs) enables them to perform various new tasks, e.g., book summarization.
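The title's point is that when deciding which cached tokens to evict, accumulated attention scores alone are a misleading importance signal and the value vectors should be factored in. A hedged sketch of one such value-aware indicator (attention mass weighted by the value-vector norm; the paper's exact formula may differ):

```python
import numpy as np

def value_aware_keep_mask(attn_weights, values, keep=256):
    """Choose which cached tokens to keep when shrinking a KV cache.

    attn_weights: (num_queries, num_cached_tokens) softmax weights
    values:       (num_cached_tokens, d) cached value vectors

    Importance combines how much attention a token has received with
    the magnitude of its value vector, so tokens with near-zero values
    are not retained just because their attention scores were high.
    """
    attn_mass = attn_weights.sum(axis=0)                  # per-token attention
    value_norm = np.linalg.norm(values, ord=1, axis=-1)   # L1 norm of each value
    importance = attn_mass * value_norm
    keep_idx = np.argsort(-importance)[:keep]
    mask = np.zeros(values.shape[0], dtype=bool)
    mask[keep_idx] = True
    return mask
```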
Training-Free Exponential Context Extension via Cascading KV Cache
The transformer's context window is vital for tasks such as few-shot learning and conditional generation as it preserves previous tokens for active memory.
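A rough, heavily hedged sketch of the cascading-cache idea as described by the title: the KV cache is split into a cascade of fixed-size sub-caches, and tokens evicted from a newer level are only selectively admitted into older levels, so recent context stays dense while older context is kept increasingly sparsely, all without retraining. The admission policy below (every other evicted token) is purely illustrative, not the paper's method.

```python
from collections import deque

class CascadingCacheSketch:
    """Illustrative cascade of fixed-size sub-caches (not the paper's exact policy)."""

    def __init__(self, num_levels=4, level_size=8):
        self.levels = [deque() for _ in range(num_levels)]
        self.level_size = level_size
        self._counters = [0] * num_levels

    def add(self, token_id):
        self._push(0, token_id)

    def _push(self, level, token_id):
        if level >= len(self.levels):
            return  # fell off the end of the cascade: evicted for good
        self.levels[level].append(token_id)
        if len(self.levels[level]) > self.level_size:
            evicted = self.levels[level].popleft()
            # Admit only every other evicted token into the next level,
            # so each level covers a longer, sparser stretch of history.
            self._counters[level] += 1
            if self._counters[level] % 2 == 0:
                self._push(level + 1, evicted)

    def cached_tokens(self):
        return [t for level in self.levels for t in level]

cache = CascadingCacheSketch()
for t in range(200):
    cache.add(t)
print(cache.cached_tokens())  # dense recent tokens, increasingly sparse older ones
```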
KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark of Long Context Capable Approaches
Long context capability is a crucial competency for large language models (LLMs) as it mitigates the human struggle to digest long-form texts.