Search Results for author: Gautier Izacard

Found 19 papers, 14 papers with code

Task-aware Retrieval with Instructions

1 code implementation · 16 Nov 2022 · Akari Asai, Timo Schick, Patrick Lewis, Xilun Chen, Gautier Izacard, Sebastian Riedel, Hannaneh Hajishirzi, Wen-tau Yih

We study the problem of retrieval with instructions, where users of a retrieval system explicitly describe their intent along with their queries.


EditEval: An Instruction-Based Benchmark for Text Improvements

1 code implementation · 27 Sep 2022 · Jane Dwivedi-Yu, Timo Schick, Zhengbao Jiang, Maria Lomeli, Patrick Lewis, Gautier Izacard, Edouard Grave, Sebastian Riedel, Fabio Petroni

Evaluation of text generation to date has primarily focused on content created sequentially, rather than improvements on a piece of text.

Text Generation

PEER: A Collaborative Language Model

no code implementations · 24 Aug 2022 · Timo Schick, Jane Dwivedi-Yu, Zhengbao Jiang, Fabio Petroni, Patrick Lewis, Gautier Izacard, Qingfei You, Christoforos Nalmpantis, Edouard Grave, Sebastian Riedel

Textual content is often the output of a collaborative writing process: We start with an initial draft, ask for suggestions, and repeatedly make changes.

Language Modelling

Atlas: Few-shot Learning with Retrieval Augmented Language Models

1 code implementation · 5 Aug 2022 · Gautier Izacard, Patrick Lewis, Maria Lomeli, Lucas Hosseini, Fabio Petroni, Timo Schick, Jane Dwivedi-Yu, Armand Joulin, Sebastian Riedel, Edouard Grave

Retrieval augmented models are known to excel at knowledge intensive tasks without the need for as many parameters, but it is unclear whether they work in few-shot settings.

Fact Checking · Few-Shot Learning · +6

Improving Wikipedia Verifiability with AI

1 code implementation · 8 Jul 2022 · Fabio Petroni, Samuel Broscheit, Aleksandra Piktus, Patrick Lewis, Gautier Izacard, Lucas Hosseini, Jane Dwivedi-Yu, Maria Lomeli, Timo Schick, Pierre-Emmanuel Mazaré, Armand Joulin, Edouard Grave, Sebastian Riedel

Hence, maintaining and improving the quality of Wikipedia references is an important challenge and there is a pressing need for better tools to assist humans in this effort.

Citation Recommendation · Fact Checking

Are Large-scale Datasets Necessary for Self-Supervised Pre-training?

no code implementations · 20 Dec 2021 · Alaaeldin El-Nouby, Gautier Izacard, Hugo Touvron, Ivan Laptev, Hervé Jegou, Edouard Grave

Our study shows that denoising autoencoders, such as BEiT or a variant that we introduce in this paper, are more robust to the type and size of the pre-training data than popular self-supervised methods trained by comparing image embeddings. We obtain competitive performance compared to ImageNet pre-training on a variety of classification datasets, from different domains.

Denoising · Instance Segmentation · +1

The Web Is Your Oyster -- Knowledge-Intensive NLP against a Very Large Web Corpus

2 code implementations · 18 Dec 2021 · Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Dmytro Okhonko, Samuel Broscheit, Gautier Izacard, Patrick Lewis, Barlas Oğuz, Edouard Grave, Wen-tau Yih, Sebastian Riedel

In order to address the increasing demands of real-world applications, research on knowledge-intensive NLP (KI-NLP) should advance by capturing the challenges of a truly open-domain environment: web-scale knowledge, lack of structure, inconsistent quality, and noise.

Common Sense Reasoning · Retrieval

Unsupervised Dense Information Retrieval with Contrastive Learning

6 code implementations · 16 Dec 2021 · Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, Edouard Grave

In this work, we explore the limits of contrastive learning as a way to train unsupervised dense retrievers and show that it leads to strong performance in various retrieval settings.

Contrastive Learning · Cross-Lingual Transfer · +4
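The paper's exact objective and hyperparameters are not given in this excerpt; as a rough illustration of the idea, a minimal NumPy sketch of the standard in-batch contrastive (InfoNCE) loss used to train dense retrievers without supervision might look like the following, where the query/key embeddings, batch size, and temperature are all hypothetical:

```python
import numpy as np

def info_nce_loss(queries, keys, temperature=0.05):
    """In-batch contrastive loss: the positive key for each query sits
    at the same batch index; every other key serves as a negative."""
    # L2-normalize so the dot product is cosine similarity.
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature                      # (batch, batch)
    # Softmax cross-entropy with the targets on the diagonal.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy batch: 4 query embeddings paired with slightly perturbed "positive"
# passage embeddings of dimension 8 (stand-ins for real encoder outputs).
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
loss = info_nce_loss(q, q + 0.01 * rng.normal(size=(4, 8)))
```

In unsupervised setups the positive pair is typically built from two views of the same document (e.g. two random spans), so no labeled query-passage pairs are needed.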

Contrastive Pre-training for Zero-Shot Information Retrieval

no code implementations · 29 Sep 2021 · Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, Edouard Grave

By contrast, in many other NLP tasks, conventional self-supervised pre-training based on masking leads to strong generalization with a small number of training examples.

Contrastive Learning · Fact Checking · +3

A Memory Efficient Baseline for Open Domain Question Answering

1 code implementation · 30 Dec 2020 · Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Sebastian Riedel, Edouard Grave

Recently, retrieval systems based on dense representations have led to important improvements in open-domain question answering, and related tasks.

Dimensionality Reduction · Open-Domain Question Answering · +3

Distilling Knowledge from Reader to Retriever for Question Answering

4 code implementations · ICLR 2021 · Gautier Izacard, Edouard Grave

A challenge of using such methods is to obtain supervised data to train the retriever model, corresponding to pairs of query and support documents.

Information Retrieval · Knowledge Distillation · +2
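The excerpt above notes that paired query/support-document supervision is hard to obtain; the distillation idea sidesteps this by treating the reader's relevance estimates as soft targets for the retriever. A minimal sketch of such a KL-divergence distillation loss, with toy scores standing in for the reader's aggregated attention and the retriever's similarity scores, might look like:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def kl_distill_loss(reader_scores, retriever_scores):
    """KL(reader || retriever) over the retrieved passages: the
    retriever's distribution is pulled toward the reader's relevance
    estimates, so no query-passage labels are required."""
    p = softmax(reader_scores)      # target distribution from the reader
    q = softmax(retriever_scores)   # retriever's current distribution
    return float(np.sum(p * (np.log(p) - np.log(q))))

# Toy scores for 5 retrieved passages: the reader strongly prefers the
# first passage, while the retriever is still uniform.
reader = np.array([2.0, 0.5, 0.1, -1.0, -2.0])
retriever = np.zeros(5)
loss = kl_distill_loss(reader, retriever)
```

Minimizing this loss over many queries gradually teaches the retriever to rank passages the way the reader finds them useful.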

Data-driven Estimation of Sinusoid Frequencies

2 code implementations · NeurIPS 2019 · Gautier Izacard, Sreyas Mohan, Carlos Fernandez-Granda

Frequency estimation is a fundamental problem in signal processing, with applications in radar imaging, underwater acoustics, seismic imaging, and spectroscopy.

Position · Seismic Imaging
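For context on the problem this data-driven method targets: the classical baseline for frequency estimation is to locate the peak of the signal's FFT magnitude spectrum. A minimal NumPy sketch of that baseline, with a hypothetical 50 Hz sinusoid sampled at 1 kHz, might look like:

```python
import numpy as np

def estimate_frequency(samples, sample_rate):
    """Periodogram baseline: return the frequency (in Hz) of the
    largest peak in the FFT magnitude spectrum of the signal."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return freqs[np.argmax(spectrum)]

# A 50 Hz sinusoid sampled at 1 kHz for one second (1000 samples).
t = np.arange(0, 1.0, 1e-3)
x = np.sin(2 * np.pi * 50.0 * t)
f_hat = estimate_frequency(x, 1000)
```

This baseline is resolution-limited by the FFT grid (here 1 Hz); the paper's learning-based approach is aimed at doing better than such classical estimators, particularly for closely spaced sinusoids.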

A Learning-Based Framework for Line-Spectra Super-resolution

1 code implementation · 14 Nov 2018 · Gautier Izacard, Brett Bernstein, Carlos Fernandez-Granda

We propose a learning-based approach for estimating the spectrum of a multisinusoidal signal from a finite number of samples.

