Long-Context Understanding
13 papers with code • 2 benchmarks • 0 datasets
Most implemented papers
RULER: What's the Real Context Size of Your Long-Context Language Models?
Despite achieving nearly perfect accuracy in the vanilla NIAH (needle-in-a-haystack) test, all models exhibit large performance drops as the context length increases.
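A NIAH-style probe hides a single fact ("the needle") inside long filler text and checks whether the model can retrieve it when asked. Below is a minimal, hedged sketch of how such a prompt can be constructed and scored; the needle wording, filler text, and the `generate()` call are illustrative assumptions, not the RULER benchmark's actual data or code.

```python
# Minimal needle-in-a-haystack (NIAH) style probe.
# Assumptions: generate(prompt) is a placeholder for any LLM call;
# the needle and filler text are illustrative, not RULER's benchmark data.
import random

def build_niah_prompt(num_sentences: int, needle: str, filler: str) -> str:
    """Embed a single 'needle' sentence at a random depth inside filler text."""
    sentences = [filler] * num_sentences
    insert_at = random.randint(0, len(sentences))
    sentences.insert(insert_at, needle)
    haystack = " ".join(sentences)
    question = "What is the secret number mentioned in the text above?"
    return f"{haystack}\n\n{question}"

def score_retrieval(model_answer: str, expected: str) -> bool:
    """Exact-substring match is a common, simple scoring rule for NIAH."""
    return expected in model_answer

# Example usage (generate() is a hypothetical model interface):
# prompt = build_niah_prompt(2000, "The secret number is 7491.",
#                            "The grass is green and the sky is blue.")
# answer = generate(prompt)
# print(score_retrieval(answer, "7491"))
```

Sweeping the number of filler sentences (and hence the context length) while tracking the retrieval accuracy reproduces the basic shape of the evaluation described above.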
Hierarchical Context Merging: Better Long Context Understanding for Pre-trained LLMs
Large language models (LLMs) have shown remarkable performance in various natural language processing tasks.
Retrieval Head Mechanistically Explains Long-Context Factuality
Despite the recent progress in long-context language models, it remains elusive how transformer-based models exhibit the capability to retrieve relevant information from arbitrary locations within the long context.
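One way to localize this retrieval behavior, in the spirit of the paper above, is to score each attention head by how often its strongest attention lands on the needle tokens while the model is copying them into its answer. The sketch below assumes per-head attention matrices are already available (e.g. via `output_attentions=True` in Hugging Face Transformers); the tensor layout, the variable names, and the 0.1 threshold are illustrative assumptions rather than the paper's exact implementation.

```python
# Sketch of a per-head retrieval-score computation.
# Assumption: `attentions` is a list (one per layer) of tensors shaped
# [num_heads, tgt_len, src_len]; `needle_positions` are the context indices
# of the needle tokens; `copy_steps` are the decoding steps at which the
# model emitted tokens copied from the needle.
import torch

def retrieval_scores(attentions, needle_positions, copy_steps):
    """For each (layer, head), return the fraction of copy steps whose argmax
    attention falls on a needle token; high-scoring heads behave as
    'retrieval heads'."""
    needle_set = set(needle_positions)
    scores = {}
    for layer_idx, attn in enumerate(attentions):
        num_heads = attn.shape[0]
        for head_idx in range(num_heads):
            hits = 0
            for step in copy_steps:
                top_src = int(torch.argmax(attn[head_idx, step]))
                if top_src in needle_set:
                    hits += 1
            scores[(layer_idx, head_idx)] = hits / max(len(copy_steps), 1)
    return scores

# Heads whose score exceeds a chosen threshold (e.g. 0.1 here, as an
# illustrative cutoff) can be flagged as candidate retrieval heads:
# heads = {k: v for k, v in retrieval_scores(attn, pos, steps).items() if v > 0.1}
```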