no code implementations • LREC 2022 • Quentin Heinrich, Gautier Viaud, Wacim Belblidia
This new dataset, comprising a total of almost 80,000 questions, makes it possible to train French Question Answering models with the ability to distinguish unanswerable questions from answerable ones.
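Distinguishing unanswerable from answerable questions is typically handled SQuAD2.0-style: the model's best answer-span score is compared against a "null" score taken at the [CLS] position, plus a tuned threshold. The sketch below illustrates that decision rule on raw start/end logits; the function name, threshold default, and span-length cap are illustrative assumptions, not details from the paper.

```python
import numpy as np

def best_answer_or_null(start_logits, end_logits, null_threshold=0.0, max_answer_len=30):
    """Pick the best answer span, or declare the question unanswerable.

    SQuAD2.0-style decision (illustrative sketch): compare the best
    non-null span score (start_logits[i] + end_logits[j]) against the
    null score at position 0 (the [CLS] token) plus a tuned threshold.
    """
    start_logits = np.asarray(start_logits, dtype=float)
    end_logits = np.asarray(end_logits, dtype=float)
    null_score = start_logits[0] + end_logits[0]  # score of "no answer"

    best_score, best_span = -np.inf, (0, 0)
    for i in range(1, len(start_logits)):
        # only consider spans of bounded length with end >= start
        for j in range(i, min(i + max_answer_len, len(end_logits))):
            score = start_logits[i] + end_logits[j]
            if score > best_score:
                best_score, best_span = score, (i, j)

    if best_score > null_score + null_threshold:
        return best_span  # answerable: token indices of the span
    return None  # unanswerable
```

For example, logits that strongly favor the [CLS] position yield `None`, while logits peaked inside the context return the corresponding `(start, end)` token indices.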
no code implementations • 27 Sep 2021 • Quentin Heinrich, Gautier Viaud, Wacim Belblidia
no code implementations • 13 Apr 2021 • Vincent Micheli, Quentin Heinrich, François Fleuret, Wacim Belblidia
Attention is a key component of the now ubiquitous pre-trained language models.
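The attention mechanism the entry refers to is, in its standard single-head form, softmax(QKᵀ/√d_k)V. A minimal NumPy sketch of that formula (a generic illustration, not code from the paper):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # subtract row max for stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V, weights
```

Each output row is a convex combination of the value vectors, weighted by the query's similarity to each key.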
no code implementations • Findings of the Association for Computational Linguistics 2020 • Martin d'Hoffschmidt, Wacim Belblidia, Tom Brendlé, Quentin Heinrich, Maxime Vidal
FQuAD is a French Native Reading Comprehension dataset of questions and answers on a set of Wikipedia articles that consists of 25,000+ samples for the 1.0 version and 60,000+ samples for the 1.1 version.
Ranked #1 on Question Answering on FQuAD