Extractive Question-Answering
39 papers with code • 0 benchmarks • 2 datasets
Most implemented papers
Look at the First Sentence: Position Bias in Question Answering
In this study, we hypothesize that when the distribution of the answer positions is highly skewed in the training set (e.g., answers lie only in the k-th sentence of each passage), QA models predicting answers as positions can learn spurious positional cues and fail to give answers in different positions.
Probabilistic Assumptions Matter: Improved Models for Distantly-Supervised Document-Level Question Answering
We address the problem of extractive question answering using document-level distant supervision, pairing questions and relevant documents with answer strings.
Rethinking the Objectives of Extractive Question Answering
Therefore we propose multiple approaches to modelling joint probability $P(a_s, a_e)$ directly.
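The contrast here is with the standard extractive QA objective, which factorizes the span probability into independent start and end distributions. A minimal NumPy sketch of the joint alternative (illustrative only, not the paper's exact model): normalize a single softmax over all valid spans $s \le e$ rather than applying two independent softmaxes.

```python
import numpy as np

def joint_span_probs(start_logits, end_logits):
    """Toy sketch: turn per-token start/end logits into a joint
    distribution P(a_s, a_e) over valid spans (s <= e), using one
    softmax over all span scores instead of two independent ones."""
    n = len(start_logits)
    scores = start_logits[:, None] + end_logits[None, :]  # additive span score
    mask = np.triu(np.ones((n, n), dtype=bool))           # keep s <= e only
    scores = np.where(mask, scores, -np.inf)
    exp = np.exp(scores - scores[mask].max())
    return exp / exp.sum()

probs = joint_span_probs(np.array([2.0, 0.5, 0.1]),
                         np.array([0.2, 1.5, 0.3]))
print(probs.sum())  # the joint distribution sums to 1 over valid spans
```

Because normalization happens over spans rather than positions, invalid spans (end before start) receive exactly zero probability by construction.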
Improving QA Generalization by Concurrent Modeling of Multiple Biases
Our framework weights each example based on the biases it contains and the strength of those biases in the training data.
Cooperative Self-training of Machine Reading Comprehension
Pretrained language models have significantly improved the performance of downstream language understanding tasks, including extractive question answering, by providing high-quality contextualized word embeddings.
Sequence tagging for biomedical extractive question answering
Following general domain EQA models, current biomedical EQA (BioEQA) models utilize the single-span extraction setting with post-processing steps.
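One motivation for a tagging formulation is that biomedical questions often have several correct answer spans in one passage, which single-span start/end prediction cannot return directly. A hedged sketch of the decoding side (a generic BIO-to-span decoder, not the cited paper's implementation):

```python
def bio_to_spans(tags):
    """Decode BIO tags into (start, end) token spans. A tagging
    formulation lets one passage yield multiple answer spans,
    unlike single-span start/end extraction."""
    spans, start = [], None
    for i, tag in enumerate(tags):
        if tag == "B":                      # new span begins
            if start is not None:
                spans.append((start, i - 1))
            start = i
        elif tag == "O":                    # span (if any) ends
            if start is not None:
                spans.append((start, i - 1))
                start = None
    if start is not None:                   # span running to the end
        spans.append((start, len(tags) - 1))
    return spans

print(bio_to_spans(["O", "B", "I", "O", "B", "O"]))  # [(1, 2), (4, 4)]
```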
How Optimal is Greedy Decoding for Extractive Question Answering?
However, this approach does not ensure that the answer is a span in the given passage, nor does it guarantee that it is the most probable one.
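For span-scoring models the usual remedy is an exhaustive search over valid spans, since taking the two argmaxes independently can yield an end before the start, i.e. no span at all. A minimal sketch (illustrative, with an assumed length cap `max_len`):

```python
import numpy as np

def best_valid_span(start_logits, end_logits, max_len=30):
    """Search for the highest-scoring valid span (s <= e, length-capped)
    rather than picking the start and end argmaxes independently."""
    n = len(start_logits)
    best, best_score = (0, 0), -np.inf
    for s in range(n):
        for e in range(s, min(n, s + max_len)):
            score = start_logits[s] + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

# Independent argmaxes would pick s=3, e=1 here -- not a valid span;
# the joint search returns a valid one instead.
start = np.array([0.1, 0.2, 0.0, 2.0])
end = np.array([0.3, 2.5, 0.1, 0.2])
print(best_valid_span(start, end))  # (1, 1)
```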
ReasonBERT: Pre-trained to Reason with Distant Supervision
We present ReasonBert, a pre-training method that augments language models with the ability to reason over long-range relations and multiple, possibly hybrid contexts.
A Few More Examples May Be Worth Billions of Parameters
We investigate the dynamics of increasing the number of model parameters versus the number of labeled examples across a wide variety of tasks.
Semantic Search as Extractive Paraphrase Span Detection
In this paper, we approach the problem of semantic search by framing the search task as paraphrase span detection, i.e. given a segment of text as a query phrase, the task is to identify its paraphrase in a given document, the same modelling setup as typically used in extractive question answering.