Search Results for author: Patrick Lewis

Found 33 papers, 26 papers with code

From One to Many: Expanding the Scope of Toxicity Mitigation in Language Models

1 code implementation · 6 Mar 2024 · Luiza Pozzobon, Patrick Lewis, Sara Hooker, Beyza Ermis

To date, toxicity mitigation in language models has focused almost entirely on single-language settings.

Cross-Lingual Transfer

MultiContrievers: Analysis of Dense Retrieval Representations

1 code implementation · 24 Feb 2024 · Seraphina Goldfarb-Tarrant, Pedro Rodriguez, Jane Dwivedi-Yu, Patrick Lewis

Dense retrievers compress source documents into (possibly lossy) vector representations, yet there is little analysis of what information is lost versus preserved, and how it affects downstream tasks.

Retrieval
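
The MultiContrievers excerpt above concerns what information survives compression into dense vectors. A minimal probing sketch of that kind of analysis, assuming we already have document embeddings and a per-document attribute to test for (everything below is illustrative, not the paper's code):

```python
# Probing sketch: can a simple classifier recover a document attribute from
# the dense embeddings? (Random stand-ins below; in practice the embeddings
# would come from a retriever's document encoder.)
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

emb = rng.normal(size=(1000, 768))                                  # document embeddings
attr = (emb[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)    # attribute correlated with them

X_train, X_test, y_train, y_test = train_test_split(emb, attr, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy well above chance suggests the attribute is preserved in the
# representation; chance-level accuracy suggests it was lost in compression.
print("probe accuracy:", probe.score(X_test, y_test))
```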

Rank-without-GPT: Building GPT-Independent Listwise Rerankers on Open-Source Large Language Models

no code implementations · 5 Dec 2023 · Xinyu Zhang, Sebastian Hofstätter, Patrick Lewis, Raphael Tang, Jimmy Lin

However, current work in this direction depends entirely on GPT models, making them a single point of failure for scientific reproducibility.

Passage Retrieval · Retrieval

Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models

1 code implementation · 11 Oct 2023 · Luiza Pozzobon, Beyza Ermis, Patrick Lewis, Sara Hooker

Considerable effort has been dedicated to mitigating toxicity, but existing methods often require drastic modifications to model parameters or the use of computationally intensive auxiliary models.

Retrieval · Text Generation

On the Challenges of Using Black-Box APIs for Toxicity Evaluation in Research

1 code implementation · 24 Apr 2023 · Luiza Pozzobon, Beyza Ermis, Patrick Lewis, Sara Hooker

We evaluate the implications of these changes on the reproducibility of findings that compare the relative merits of models and methods that aim to curb toxicity.

Mini-Model Adaptation: Efficiently Extending Pretrained Models to New Languages via Aligned Shallow Training

no code implementations · 20 Dec 2022 · Kelly Marchisio, Patrick Lewis, Yihong Chen, Mikel Artetxe

Prior work shows that it is possible to expand pretrained Masked Language Models (MLMs) to new languages by learning a new set of embeddings, while keeping the transformer body frozen.

Cross-Lingual Transfer
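
The Mini-Model Adaptation excerpt builds on the recipe of learning a new set of embeddings for a new language while keeping the transformer body frozen. A hedged sketch of that baseline recipe with Hugging Face transformers (model name and vocabulary size are placeholders; the paper's contribution, per its title, is training those embeddings more efficiently via an aligned shallow mini-model):

```python
# Sketch: learn new-language embeddings while freezing the transformer body.
# Model name and vocabulary size are placeholder assumptions.
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("roberta-base")

# Grow the embedding matrix to hold the new language's subword vocabulary.
model.resize_token_embeddings(60_000)

# Freeze everything...
for param in model.parameters():
    param.requires_grad = False

# ...except the input embeddings (and, where weights are tied, the output
# embeddings that share them).
for param in model.get_input_embeddings().parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")
```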

Task-aware Retrieval with Instructions

1 code implementation · 16 Nov 2022 · Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, Sebastian Riedel, Hannaneh Hajishirzi, Wen-tau Yih

We study the problem of retrieval with instructions, where users of a retrieval system explicitly describe their intent along with their queries.

Retrieval
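
For the retrieval-with-instructions setting described above, one simple way to picture the input is an intent description paired with the query before dense encoding. A small illustrative sketch with a generic sentence encoder (the encoder, instruction wording, and concatenation scheme are assumptions, not the paper's retriever):

```python
# Illustrative sketch: prepend an instruction to the query before encoding,
# then score documents by cosine similarity. Generic encoder, not the paper's.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

instruction = "Retrieve a Wikipedia paragraph that answers this question."
query = "who wrote the novel Dune"

docs = [
    "Dune is a 1965 science fiction novel by American author Frank Herbert.",
    "Dune is a 2021 film directed by Denis Villeneuve.",
]

q_emb = encoder.encode(instruction + " " + query, convert_to_tensor=True)
d_emb = encoder.encode(docs, convert_to_tensor=True)
print(util.cos_sim(q_emb, d_emb))
```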

EditEval: An Instruction-Based Benchmark for Text Improvements

1 code implementation · 27 Sep 2022 · Jane Dwivedi-Yu, Timo Schick, Zhengbao Jiang, Maria Lomeli, Patrick Lewis, Gautier Izacard, Edouard Grave, Sebastian Riedel, Fabio Petroni

Evaluation of text generation to date has primarily focused on content created sequentially, rather than improvements on a piece of text.

Text Generation

PEER: A Collaborative Language Model

no code implementations · 24 Aug 2022 · Timo Schick, Jane Dwivedi-Yu, Zhengbao Jiang, Fabio Petroni, Patrick Lewis, Gautier Izacard, Qingfei You, Christoforos Nalmpantis, Edouard Grave, Sebastian Riedel

Textual content is often the output of a collaborative writing process: We start with an initial draft, ask for suggestions, and repeatedly make changes.

Language Modelling

Atlas: Few-shot Learning with Retrieval Augmented Language Models

1 code implementation · 5 Aug 2022 · Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, Edouard Grave

Retrieval-augmented models are known to excel at knowledge-intensive tasks without needing as many parameters, but it is unclear whether they work in few-shot settings.

Fact Checking · Few-Shot Learning · +6

Improving Wikipedia Verifiability with AI

1 code implementation · 8 Jul 2022 · Fabio Petroni, Samuel Broscheit, Aleksandra Piktus, Patrick Lewis, Gautier Izacard, Lucas Hosseini, Jane Dwivedi-Yu, Maria Lomeli, Timo Schick, Pierre-Emmanuel Mazaré, Armand Joulin, Edouard Grave, Sebastian Riedel

Hence, maintaining and improving the quality of Wikipedia references is an important challenge and there is a pressing need for better tools to assist humans in this effort.

Citation Recommendation · Fact Checking

Autoregressive Search Engines: Generating Substrings as Document Identifiers

2 code implementations · 22 Apr 2022 · Michele Bevilacqua, Giuseppe Ottaviano, Patrick Lewis, Wen-tau Yih, Sebastian Riedel, Fabio Petroni

Knowledge-intensive language tasks require NLP systems to both provide the correct answer and retrieve supporting evidence for it in a given corpus.

Information Retrieval · Retrieval

Reasoning over Public and Private Data in Retrieval-Based Systems

1 code implementation · 14 Mar 2022 · Simran Arora, Patrick Lewis, Angela Fan, Jacob Kahn, Christopher Ré

We first define the PUBLIC-PRIVATE AUTOREGRESSIVE INFORMATION RETRIEVAL (PAIR) privacy framework for the novel retrieval setting over multiple privacy scopes.

Fact Checking · Information Retrieval · +3

The Web Is Your Oyster -- Knowledge-Intensive NLP against a Very Large Web Corpus

2 code implementations · 18 Dec 2021 · Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Dmytro Okhonko, Samuel Broscheit, Gautier Izacard, Patrick Lewis, Barlas Oğuz, Edouard Grave, Wen-tau Yih, Sebastian Riedel

In order to address the increasing demands of real-world applications, research on knowledge-intensive NLP (KI-NLP) should advance by capturing the challenges of a truly open-domain environment: web-scale knowledge, lack of structure, inconsistent quality, and noise.

Common Sense Reasoning · Retrieval

Boosted Dense Retriever

no code implementations · NAACL 2022 · Patrick Lewis, Barlas Oğuz, Wenhan Xiong, Fabio Petroni, Wen-tau Yih, Sebastian Riedel

DrBoost is trained in stages: each component model is learned sequentially and specialized by focusing only on retrieval mistakes made by the current ensemble.

Quantization · Retrieval
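
The DrBoost excerpt describes component retrievers learned sequentially on the current ensemble's mistakes. One property worth illustrating, under the assumption that the ensemble is formed by concatenating each component's low-dimensional vectors (a hedged reading of the boosting setup, not code from the paper): the ensemble's inner-product score is exactly the sum of the per-component scores, so each new component only has to account for the residual errors of the others.

```python
# Ensemble scoring sketch for a boosted dense retriever: concatenating the
# per-component query/document vectors makes the ensemble inner product equal
# the sum of the components' inner products. Sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_components, dim = 3, 32

q_parts = [rng.normal(size=dim) for _ in range(n_components)]  # per-component query vectors
d_parts = [rng.normal(size=dim) for _ in range(n_components)]  # per-component document vectors

q = np.concatenate(q_parts)
d = np.concatenate(d_parts)

score_concat = q @ d
score_sum = sum(qp @ dp for qp, dp in zip(q_parts, d_parts))
assert np.isclose(score_concat, score_sum)
print(score_concat, score_sum)
```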

Salient Phrase Aware Dense Retrieval: Can a Dense Retriever Imitate a Sparse One?

2 code implementations · 13 Oct 2021 · Xilun Chen, Kushal Lakhotia, Barlas Oğuz, Anchit Gupta, Patrick Lewis, Stan Peshterliev, Yashar Mehdad, Sonal Gupta, Wen-tau Yih

Despite their recent popularity and well-known advantages, dense retrievers still lag behind sparse methods such as BM25 in their ability to reliably match salient phrases and rare entities in the query and to generalize to out-of-domain data.

Open-Domain Question Answering · Passage Retrieval · +1

A Few More Examples May Be Worth Billions of Parameters

1 code implementation · 8 Oct 2021 · Yuval Kirstain, Patrick Lewis, Sebastian Riedel, Omer Levy

We investigate the dynamics of increasing the number of model parameters versus the number of labeled examples across a wide variety of tasks.

Extractive Question-Answering · Multiple-choice · +1

Challenges in Generalization in Open Domain Question Answering

1 code implementation · Findings (NAACL) 2022 · Linqing Liu, Patrick Lewis, Sebastian Riedel, Pontus Stenetorp

Recent work on Open Domain Question Answering has shown that there is a large discrepancy in model performance between novel test questions and those that largely overlap with training questions.

Natural Questions · Open-Domain Question Answering · +3

Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval

1 code implementation · ICLR 2021 · Wenhan Xiong, Xiang Lorraine Li, Srini Iyer, Jingfei Du, Patrick Lewis, William Yang Wang, Yashar Mehdad, Wen-tau Yih, Sebastian Riedel, Douwe Kiela, Barlas Oğuz

We propose a simple and efficient multi-hop dense retrieval approach for answering complex open-domain questions, which achieves state-of-the-art performance on two multi-hop datasets, HotpotQA and multi-evidence FEVER.

Question Answering · Retrieval
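
The multi-hop retrieval excerpt above can be pictured as iterative dense retrieval in which each hop's query is the question extended with the passages retrieved so far. A toy sketch with a generic sentence encoder standing in for the paper's trained retriever (corpus, encoder, and reformulation scheme are simplifications, not the paper's system):

```python
# Toy multi-hop dense retrieval: retrieve, append the retrieved passage to the
# query, and retrieve again, skipping passages already used in earlier hops.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

passages = [
    "The Laleli Mosque is an Ottoman imperial mosque located in Laleli, Fatih, Istanbul.",
    "Fatih is a district of and the historic core of Istanbul, Turkey.",
    "The Eiffel Tower is a wrought-iron lattice tower in Paris.",
]
p_emb = encoder.encode(passages, convert_to_tensor=True)

query = "In which country is the district containing the Laleli Mosque?"
retrieved, seen = [], set()
for hop in range(2):
    q_emb = encoder.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, p_emb)[0]
    # Pick the best passage not retrieved in an earlier hop.
    order = scores.argsort(descending=True)
    best = next(int(i) for i in order if int(i) not in seen)
    seen.add(best)
    retrieved.append(passages[best])
    # Reformulate the query by appending the newly retrieved passage.
    query = query + " " + passages[best]

print(retrieved)
```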

How Context Affects Language Models' Factual Predictions

no code implementations · AKBC 2020 · Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel

When pre-trained on large unsupervised textual corpora, language models are able to store and retrieve factual knowledge to some extent, making it possible to use them directly for zero-shot cloze-style question answering.

Information Retrieval · Language Modelling · +4
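
The excerpt above refers to zero-shot cloze-style querying of a pretrained masked language model, and the paper studies how added context shifts those factual predictions. A small sketch of that setup (model choice and example sentences are illustrative, not the paper's benchmark):

```python
# Zero-shot cloze-style factual querying of a masked LM, with and without a
# context passage prepended. Model and examples are illustrative choices.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

cloze = "Dante was born in [MASK]."
context = "Dante Alighieri was a poet born in Florence in 1265."

for prompt in (cloze, context + " " + cloze):
    print(prompt)
    for cand in fill_mask(prompt, top_k=3):
        print(f"  {cand['token_str']!r}: {cand['score']:.3f}")
```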

Dense Passage Retrieval for Open-Domain Question Answering

17 code implementations · EMNLP 2020 · Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, Wen-tau Yih

Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method.

Open-Domain Question Answering · Passage Retrieval · +1
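
The DPR excerpt contrasts sparse scoring such as TF-IDF/BM25 with dense passage retrieval. A short sketch using the DPR question and context encoders available in Hugging Face transformers, scoring passages by dot product (the example question and passages are illustrative):

```python
# Dense passage retrieval sketch with the released DPR dual encoders:
# questions and passages are embedded separately and scored by dot product.
import torch
from transformers import (
    DPRContextEncoder, DPRContextEncoderTokenizer,
    DPRQuestionEncoder, DPRQuestionEncoderTokenizer,
)

q_tok = DPRQuestionEncoderTokenizer.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
q_enc = DPRQuestionEncoder.from_pretrained("facebook/dpr-question_encoder-single-nq-base")
c_tok = DPRContextEncoderTokenizer.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")
c_enc = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-single-nq-base")

question = "who wrote the opera Carmen"
passages = [
    "Carmen is an opera in four acts by the French composer Georges Bizet.",
    "The Marriage of Figaro is an opera buffa composed in 1786 by Mozart.",
]

with torch.no_grad():
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output
    p_emb = c_enc(**c_tok(passages, return_tensors="pt", padding=True, truncation=True)).pooler_output

scores = q_emb @ p_emb.T  # higher dot product = more relevant passage
print(scores)
```

In a full system the passage embeddings would be pre-computed and indexed (e.g. with FAISS), so only the question is encoded at query time.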

Unsupervised Question Decomposition for Question Answering

2 code implementations · EMNLP 2020 · Ethan Perez, Patrick Lewis, Wen-tau Yih, Kyunghyun Cho, Douwe Kiela

We aim to improve question answering (QA) by decomposing hard questions into simpler sub-questions that existing QA systems are capable of answering.

Question Answering

MLQA: Evaluating Cross-lingual Extractive Question Answering

4 code implementations · ACL 2020 · Patrick Lewis, Barlas Oğuz, Ruty Rinott, Sebastian Riedel, Holger Schwenk

An alternative to building large monolingual training datasets is to develop cross-lingual systems which can transfer to a target language without requiring training data in that language.

Extractive Question-Answering · Machine Translation · +1

Unsupervised Question Answering by Cloze Translation

1 code implementation · ACL 2019 · Patrick Lewis, Ludovic Denoyer, Sebastian Riedel

We approach this problem by first learning to generate context, question and answer triples in an unsupervised manner, which we then use to synthesize Extractive QA training data automatically.

Natural Questions · NMT · +2
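
The excerpt describes synthesizing (context, question, answer) triples without supervision. A toy sketch of just the cloze-generation step, using a crude heuristic to pick candidate answer spans (the heuristic is illustrative only; the paper subsequently rewrites such clozes into natural questions, e.g. with unsupervised translation):

```python
# Toy cloze generation for unsupervised extractive QA: pick a candidate answer
# span from a passage and blank it out to form a cloze "question". The span
# heuristic (capitalised phrases / four-digit numbers) is purely illustrative.
import re

def make_cloze_triples(context):
    candidates = re.findall(r"(?:[A-Z][\w-]+(?:\s+[A-Z][\w-]+)*|\d{4})", context)
    triples = []
    for answer in candidates:
        question = context.replace(answer, "[MASK]", 1)
        triples.append({"context": context, "question": question, "answer": answer})
    return triples

context = "Alexander Fleming discovered penicillin in 1928 at St Mary's Hospital in London."
for t in make_cloze_triples(context):
    print(t["question"], "->", t["answer"])
```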
