Retrieval-augmented Generation
723 papers with code • 0 benchmarks • 0 datasets
Benchmarks
These leaderboards are used to track progress in Retrieval-augmented Generation.
Libraries
Use these libraries to find Retrieval-augmented Generation models and implementations.
Most implemented papers
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks.
Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
Our framework trains a single arbitrary LM that adaptively retrieves passages on-demand, and generates and reflects on retrieved passages and its own generations using special tokens, called reflection tokens.
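The reflection-token mechanism can be sketched as a parsing-and-decision step over the model's output: the LM emits special tokens inline, and the pipeline acts on them (e.g., triggering retrieval on demand). A minimal illustration, where the token surface forms and the decision logic are assumptions of this sketch, not the paper's exact vocabulary:

```python
import re

# Illustrative reflection-token strings; the surface forms here are
# assumptions for this sketch, not the paper's exact token vocabulary.
RETRIEVE = "[Retrieve]"
NO_RETRIEVE = "[No Retrieve]"

def parse_reflection(generation: str):
    """Split an LM generation into its text and any inline reflection tokens."""
    tokens = re.findall(r"\[[^\]]+\]", generation)
    text = re.sub(r"\[[^\]]+\]", "", generation).strip()
    return text, tokens

def needs_retrieval(tokens):
    """On-demand retrieval: fetch passages only when the model asked for them."""
    return RETRIEVE in tokens

text, toks = parse_reflection("[Retrieve] This claim needs supporting evidence.")
```

In the full framework the same idea extends to critique tokens (relevance, support, utility) that score retrieved passages and the model's own generations.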
Retrieval-Augmented Generation for Large Language Models: A Survey
Large Language Models (LLMs) showcase impressive capabilities but encounter challenges like hallucination, outdated knowledge, and non-transparent, untraceable reasoning processes.
BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack
The BABILong benchmark is extendable to any length to support the evaluation of new upcoming models with increased capabilities, and we provide splits up to 10 million token lengths.
ColPali: Efficient Document Retrieval with Vision Language Models
Documents are visually rich structures that convey information through text, but also figures, page layouts, tables, or even fonts.
RAGAS: Automated Evaluation of Retrieval Augmented Generation
We introduce RAGAs (Retrieval Augmented Generation Assessment), a framework for reference-free evaluation of Retrieval Augmented Generation (RAG) pipelines.
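A reference-free metric in this spirit scores the answer against the retrieved context alone, with no gold answer. The sketch below approximates a faithfulness score as the fraction of answer sentences whose content words appear in the context; note that RAGAs itself uses LLM judges for this, so the word-overlap proxy is purely an assumption of this sketch:

```python
def faithfulness(answer_sentences, context, threshold=0.5):
    """Reference-free faithfulness proxy: fraction of answer sentences whose
    content words (length > 3) mostly appear in the retrieved context.
    RAGAs uses an LLM judge for statement verification; simple word overlap
    here is a stand-in to show the reference-free scoring shape."""
    ctx = set(context.lower().split())
    supported = 0
    for sentence in answer_sentences:
        words = [w for w in sentence.lower().split() if len(w) > 3]
        if words and sum(w in ctx for w in words) / len(words) >= threshold:
            supported += 1
    return supported / len(answer_sentences) if answer_sentences else 0.0

ctx = "the eiffel tower was completed in 1889 in paris"
score = faithfulness(
    ["The tower was completed in 1889", "It is made of chocolate"], ctx
)
```

The second sentence finds no support in the context, so only half the answer is judged faithful.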
RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models
Retrieval-augmented generation (RAG) has become a main technique for alleviating hallucinations in large language models (LLMs).
The Power of Noise: Redefining Retrieval for RAG Systems
Retrieval-Augmented Generation (RAG) has recently emerged as a method to extend beyond the pre-trained knowledge of Large Language Models by augmenting the original prompt with relevant passages or documents retrieved by an Information Retrieval (IR) system.
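The core mechanism described here, retrieving relevant passages and prepending them to the prompt, can be sketched in a few lines. This is a minimal illustration assuming a toy bag-of-words retriever in place of a real IR system:

```python
import math
from collections import Counter

def score(query: str, passage: str) -> float:
    """Cosine similarity over bag-of-words counts (a toy stand-in
    for a real IR system such as BM25 or a dense retriever)."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    dot = sum(q[w] * p[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in p.values())))
    return dot / norm if norm else 0.0

def build_rag_prompt(query: str, corpus: list[str], k: int = 2) -> str:
    """Retrieve the top-k passages and augment the original prompt with them."""
    top = sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]
    context = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(top))
    return (f"Answer using the context below.\n{context}\n\n"
            f"Question: {query}\nAnswer:")

corpus = [
    "The Eiffel Tower is located in Paris and was completed in 1889.",
    "Photosynthesis converts light energy into chemical energy in plants.",
    "Paris is the capital of France.",
]
prompt = build_rag_prompt("When was the Eiffel Tower completed?", corpus, k=1)
```

The augmented prompt is then passed to the LLM, which can ground its answer in the retrieved passages rather than relying solely on pre-trained knowledge.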
Retrieval-Augmented Generation for AI-Generated Content: A Survey
We first classify RAG foundations according to how the retriever augments the generator, distilling the fundamental abstractions of the augmentation methodologies for various retrievers and generators.
From Local to Global: A Graph RAG Approach to Query-Focused Summarization
To combine the strengths of these contrasting methods, we propose a Graph RAG approach to question answering over private text corpora that scales with both the generality of user questions and the quantity of source text to be indexed.
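The graph-building step can be sketched as extracting entities from text chunks, linking entities that co-occur, and grouping the graph into communities that are later summarized. A stdlib-only sketch, where connected components stand in for the paper's actual community detection (GraphRAG uses Leiden clustering) and the entity list is assumed rather than LLM-extracted:

```python
from collections import defaultdict
from itertools import combinations

def build_entity_graph(docs, entities):
    """Add an edge between two entities whenever they co-occur in a chunk.
    (Entity extraction is assumed done; GraphRAG uses an LLM for it.)"""
    graph = defaultdict(set)
    for doc in docs:
        present = [e for e in entities if e.lower() in doc.lower()]
        for a, b in combinations(present, 2):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def communities(graph):
    """Connected components as a stand-in for real community detection;
    each component would then get its own LLM-written summary."""
    seen, out = set(), []
    for node in graph:
        if node in seen:
            continue
        comp, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        out.append(comp)
    return out

docs = ["Alice works with Bob.", "Bob met Carol.", "Dave likes Erin."]
graph = build_entity_graph(docs, ["Alice", "Bob", "Carol", "Dave", "Erin"])
comps = communities(graph)
```

At query time, a global question is answered by map-reducing over the per-community summaries instead of over raw chunks, which is what lets the approach scale with both corpus size and question generality.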