Extractive Question-Answering
44 papers with code • 0 benchmarks • 3 datasets
Most implemented papers
LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention
In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer.
MLQA: Evaluating Cross-lingual Extractive Question Answering
An alternative to building large monolingual training datasets is to develop cross-lingual systems which can transfer to a target language without requiring training data in that language.
Learning Recurrent Span Representations for Extractive Question Answering
In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed-length representations of all spans in the evidence document with a recurrent network.
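The span-extraction formulation underlying this line of work can be sketched in a few lines: given per-token start and end scores, pick the highest-scoring valid span. This is an illustrative baseline, not the paper's recurrent span model; the scores and tokens below are made up.

```python
def best_span(start_scores, end_scores, max_len=10):
    """Return (i, j) maximizing start_scores[i] + end_scores[j], with j >= i
    and span length at most max_len."""
    best, best_score = (0, 0), float("-inf")
    for i, s in enumerate(start_scores):
        for j in range(i, min(i + max_len, len(end_scores))):
            score = s + end_scores[j]
            if score > best_score:
                best, best_score = (i, j), score
    return best

# Toy example: the model assigns high start/end scores to "Paris".
tokens = ["The", "Eiffel", "Tower", "is", "in", "Paris"]
start = [0.1, 0.2, 0.1, 0.0, 0.0, 2.0]
end   = [0.0, 0.1, 0.3, 0.0, 0.0, 2.5]
i, j = best_span(start, end)
print(tokens[i:j + 1])  # -> ['Paris']
```

Scoring every span this way is quadratic in document length, which is why models that build span representations efficiently (as the paper proposes) matter.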
MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain Question Answering
Progress in cross-lingual modeling depends on challenging, realistic, and diverse evaluation sets.
MarIA: Spanish Language Models
This work presents MarIA, a family of Spanish language models and associated resources made available to the industry and the research community.
On the Multilingual Capabilities of Very Large-Scale English Language Models
Generative Pre-trained Transformers (GPTs) have recently been scaled to unprecedented sizes in the history of machine learning.
Can Explanations Be Useful for Calibrating Black Box Models?
Our approach first extracts a set of features that combine human intuition about the task with model attributions generated by black-box interpretation techniques, then uses a simple calibrator, in the form of a classifier, to predict whether the base model was correct.
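The core idea of a feature-based calibrator can be sketched as follows. This is a simplified illustration, not the paper's method: the two features (model confidence and an attribution-based agreement score) and the toy data are assumptions, and a plain logistic regression stands in for the calibrator.

```python
import math

def train_calibrator(features, labels, lr=0.5, epochs=200):
    """Logistic-regression calibrator trained with plain gradient descent.
    Predicts P(base model was correct) from per-example features."""
    w = [0.0] * len(features[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of log loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict_correct(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: [model confidence, attribution agreement] -> was the model right?
X = [[0.9, 0.8], [0.95, 0.9], [0.3, 0.2], [0.4, 0.1], [0.85, 0.7], [0.2, 0.3]]
y = [1, 1, 0, 0, 1, 0]
w, b = train_calibrator(X, y)
print(predict_correct(w, b, [0.9, 0.85]))  # high probability of correctness
```

The calibrator never inspects the base model's internals, only features computed from its behavior, which is what makes the approach applicable to black-box models.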
COVID-19 event extraction from Twitter via extractive question answering with continuous prompts
As COVID-19 ravages the world, social media analytics could augment traditional surveys in assessing how the pandemic evolves and capturing consumer chatter that could help healthcare agencies in addressing it.
Learning to Filter Context for Retrieval-Augmented Generation
To alleviate these problems, we propose FILCO, a method that improves the quality of the context provided to the generator by (1) identifying useful context based on lexical and information-theoretic approaches, and (2) training context filtering models that can filter retrieved contexts at test time.
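The lexical half of this idea can be sketched with a simple unigram-overlap filter that keeps only the retrieved passages most relevant to the question. Note this is only an illustrative heuristic: FILCO itself also trains learned filtering models, and the example question and passages below are invented.

```python
import re

def filter_contexts(question, passages, top_k=1):
    """Keep the top_k passages with the highest unigram-overlap F1
    against the question (a lexical stand-in for learned filtering)."""
    q = set(re.findall(r"\w+", question.lower()))
    def overlap_f1(passage):
        t = set(re.findall(r"\w+", passage.lower()))
        common = len(q & t)
        if common == 0:
            return 0.0
        prec, rec = common / len(t), common / len(q)
        return 2 * prec * rec / (prec + rec)
    return sorted(passages, key=overlap_f1, reverse=True)[:top_k]

passages = [
    "The capital of France is Paris.",
    "Bananas are rich in potassium.",
]
print(filter_contexts("What is the capital of France?", passages))
# -> ['The capital of France is Paris.']
```

Filtering before generation shrinks the prompt and removes distractor passages, which is the failure mode the paper targets.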
Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation
The meta-templates for a dataset produce training examples in which the input is the unannotated text plus the task attribute, and the output consists of the instruction and the response.