Search Results for author: David Rau

Found 7 papers, 3 papers with code

Context Embeddings for Efficient Answer Generation in RAG

no code implementations12 Jul 2024 David Rau, Shuai Wang, Hervé Déjean, Stéphane Clinchant

We address this challenge by presenting COCOM, an effective context compression method that reduces long contexts to only a handful of Context Embeddings, speeding up generation time by a large margin.

Tasks: Answer Generation, RAG, +1
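For readers unfamiliar with the idea, the sketch below illustrates the general principle of context compression for RAG. It is a hypothetical illustration, not COCOM's implementation; the class name, sizes, and pooling scheme are assumptions. A small set of learned query vectors attends over the encoded retrieved passages and pools them into a fixed number of embeddings that the generator reads in place of the raw context tokens.

```python
# Hypothetical sketch of context compression for RAG (not COCOM's code):
# a long retrieved context is pooled into a few "context embeddings" that
# replace the raw tokens during answer generation.
import torch
import torch.nn as nn

class ContextCompressor(nn.Module):
    def __init__(self, hidden: int = 768, n_ctx_embeddings: int = 4):
        super().__init__()
        # Learned query vectors; each one pools the context into one embedding.
        self.queries = nn.Parameter(torch.randn(n_ctx_embeddings, hidden))
        self.attn = nn.MultiheadAttention(hidden, num_heads=8, batch_first=True)

    def forward(self, context_states: torch.Tensor) -> torch.Tensor:
        # context_states: (batch, ctx_len, hidden) token states from an encoder.
        q = self.queries.unsqueeze(0).expand(context_states.size(0), -1, -1)
        compressed, _ = self.attn(q, context_states, context_states)
        return compressed  # (batch, n_ctx_embeddings, hidden)

# A 2048-token retrieved context shrinks to 4 vectors the generator attends to.
ctx = torch.randn(1, 2048, 768)
print(ContextCompressor()(ctx).shape)  # torch.Size([1, 4, 768])
```

Because the decoder then attends over a handful of vectors rather than thousands of context tokens, generation cost drops sharply, which is the speedup the abstract refers to.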

BERGEN: A Benchmarking Library for Retrieval-Augmented Generation

1 code implementation1 Jul 2024 David Rau, Hervé Déjean, Nadezhda Chirkova, Thibault Formal, Shuai Wang, Vassilina Nikoulina, Stéphane Clinchant

In response to the recent popularity of generative LLMs, many RAG approaches have been proposed, which involve an intricate combination of different configurations such as evaluation datasets, collections, metrics, retrievers, and LLMs.

Tasks: Benchmarking, RAG, +1
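The configuration sprawl the abstract mentions is easy to see concretely. The snippet below is not BERGEN's actual API; it is a hypothetical enumeration of the experiment grid (dataset x retriever x LLM x metric) that such a benchmarking library has to keep consistent and reproducible.

```python
# Hypothetical illustration (not BERGEN's API) of how quickly the RAG
# configuration space grows across a few experimental axes.
from itertools import product

datasets   = ["nq", "triviaqa", "hotpotqa"]
retrievers = ["bm25", "splade", "dense"]
llms       = ["llama-2-7b", "mistral-7b"]
metrics    = ["match", "llm-as-judge"]

experiments = [
    {"dataset": d, "retriever": r, "llm": m, "metric": e}
    for d, r, m, e in product(datasets, retrievers, llms, metrics)
]
print(len(experiments))  # 36 runs from just four small axes
```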

Retrieval-augmented generation in multilingual settings

1 code implementation1 Jul 2024 Nadezhda Chirkova, David Rau, Hervé Déjean, Thibault Formal, Stéphane Clinchant, Vassilina Nikoulina

Retrieval-augmented generation (RAG) has recently emerged as a promising solution for incorporating up-to-date or domain-specific knowledge into large language models (LLMs) and improving LLM factuality, but is predominantly studied in English-only settings.

Tasks: Prompt Engineering, RAG, +1
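One practical issue in multilingual RAG is that the query, the retrieved passages, and the expected answer can each be in a different language. The helper below is a hypothetical sketch (the function name and prompt wording are assumptions, not the paper's code) of pinning the answer language down explicitly in the prompt.

```python
# Hypothetical sketch: a multilingual RAG prompt where retrieved passages
# may not share the language of the question or the desired answer.
def build_prompt(question: str, passages: list[str], answer_language: str) -> str:
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        f"Background passages (any language):\n{context}\n\n"
        f"Question: {question}\n"
        f"Answer in {answer_language}, using only the passages above."
    )

print(build_prompt(
    "Quelle est la capitale de l'Australie ?",
    ["Canberra is the capital city of Australia."],
    answer_language="French",
))
```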

The Role of Complex NLP in Transformers for Text Ranking?

no code implementations6 Jul 2022 David Rau, Jaap Kamps

Even though term-based methods such as BM25 provide strong baselines in ranking, under certain conditions they are dominated by large pre-trained masked language models (MLMs) such as BERT.

Tasks: Position, Re-Ranking
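The contrast the abstract draws can be reproduced in a few lines. The sketch below assumes the third-party rank_bm25 and sentence-transformers packages and a public MS MARCO cross-encoder checkpoint; it scores the same query with a term-based ranker and with a pre-trained MLM used as a re-ranker.

```python
# Sketch contrasting a term-based ranker (BM25) with a pre-trained MLM used
# as a cross-encoder re-ranker; assumes rank_bm25 and sentence-transformers.
from rank_bm25 import BM25Okapi
from sentence_transformers import CrossEncoder

docs = [
    "BM25 is a bag-of-words ranking function built on term statistics.",
    "BERT is a large pre-trained masked language model.",
]
query = "term-based ranking baselines"

# Term-based scoring: exact-match term statistics only.
bm25 = BM25Okapi([d.lower().split() for d in docs])
print(bm25.get_scores(query.lower().split()))

# MLM-based re-ranking: the model reads query and document jointly.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
print(reranker.predict([(query, d) for d in docs]))
```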

How Different are Pre-trained Transformers for Text Ranking?

1 code implementation5 Apr 2022 David Rau, Jaap Kamps

Our results contribute to our understanding of (black-box) neural rankers relative to (well-understood) traditional rankers and help explain the particular experimental setting of MS MARCO-based test collections.

Tasks: Passage Retrieval, Retrieval

On the Realization of Compositionality in Neural Networks

no code implementations WS 2019 Joris Baan, Jana Leible, Mitja Nikolaus, David Rau, Dennis Ulmer, Tim Baumgärtner, Dieuwke Hupkes, Elia Bruni

We present a detailed comparison of two types of sequence-to-sequence models trained to conduct a compositional task.

Point-less: More Abstractive Summarization with Pointer-Generator Networks

no code implementations18 Apr 2019 Freek Boutkan, Jorn Ranzijn, David Rau, Eelco van der Wel

The Pointer-Generator architecture has been shown to be a substantial improvement for abstractive summarization seq2seq models.

Tasks: Abstractive Text Summarization
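For context, the pointer-generator's key move is mixing two distributions at every decoding step: a generate gate p_gen weights the decoder's vocabulary distribution against a copy distribution built by scattering attention weights onto the source tokens. Below is a minimal sketch of that mixture, following the standard pointer-generator formulation rather than this paper's code; all tensor values are illustrative.

```python
# Minimal sketch of the pointer-generator mixture:
# P(w) = p_gen * P_vocab(w) + (1 - p_gen) * sum of attention on source copies of w.
import torch

vocab_size, src_len = 10, 6
vocab_dist = torch.softmax(torch.randn(vocab_size), dim=0)  # P_vocab
attention  = torch.softmax(torch.randn(src_len), dim=0)     # over source tokens
src_ids    = torch.tensor([3, 7, 7, 1, 0, 9])               # source token ids
p_gen      = torch.sigmoid(torch.randn(()))                 # generate-vs-copy gate

copy_dist  = torch.zeros(vocab_size).scatter_add(0, src_ids, attention)
final_dist = p_gen * vocab_dist + (1 - p_gen) * copy_dist
print(final_dist.sum())  # ~1.0: still a valid probability distribution
```

A model that copies heavily (low p_gen) tends toward extractive output; steering it away from copying is what the "more abstractive" in the paper's title refers to.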
