Extractive Question-Answering

39 papers with code • 0 benchmarks • 2 datasets

Extractive question answering is the task of answering a question by selecting a span of text from a given passage, typically by predicting the answer's start and end positions in the passage.
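The standard formulation scores every candidate span by summing a per-token start score and end score, then returns the highest-scoring valid span. A minimal sketch with toy hand-written scores (in practice these logits come from a model):

```python
def best_span(start_scores, end_scores, max_len=30):
    """Pick the highest-scoring (start, end) span with end >= start
    and a bounded span length."""
    best, best_score = (0, 0), float("-inf")
    for s, s_score in enumerate(start_scores):
        for e in range(s, min(s + max_len, len(end_scores))):
            score = s_score + end_scores[e]
            if score > best_score:
                best_score, best = score, (s, e)
    return best, best_score

# Toy token-level scores (hypothetical values, not model output).
tokens = ["The", "cat", "sat", "on", "the", "mat"]
start = [0.1, 2.0, 0.3, 0.0, 0.1, 0.2]
end = [0.0, 0.5, 1.8, 0.2, 0.1, 0.4]
(s, e), _ = best_span(start, end)
print(" ".join(tokens[s : e + 1]))  # -> cat sat
```

Note the `end >= start` constraint: unlike taking independent argmaxes of the two score vectors, this guarantees the prediction is a well-formed span.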

Most implemented papers

Look at the First Sentence: Position Bias in Question Answering

dmis-lab/position-bias EMNLP 2020

In this study, we hypothesize that when the distribution of the answer positions is highly skewed in the training set (e.g., answers lie only in the k-th sentence of each passage), QA models predicting answers as positions can learn spurious positional cues and fail to give answers in different positions.
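The skew the paper hypothesizes about can be checked directly by tallying which sentence of each passage contains the answer. A minimal sketch with a hypothetical helper (not the paper's code; naive `". "` sentence splitting is an assumption):

```python
from collections import Counter

def answer_sentence_histogram(examples):
    """Tally the index of the first sentence containing the answer,
    to reveal positional skew in a (passage, answer) training set."""
    counts = Counter()
    for passage, answer in examples:
        for i, sentence in enumerate(passage.split(". ")):
            if answer in sentence:
                counts[i] += 1
                break
    return counts

examples = [
    ("Paris is the capital. It is in France.", "Paris"),
    ("Tokyo is large. Tokyo is the capital.", "large"),
]
print(answer_sentence_histogram(examples))  # Counter({0: 2})
```

A histogram concentrated on a single index (here, every answer in sentence 0) is exactly the skewed setting under which the paper argues position-predicting models learn spurious cues.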

Probabilistic Assumptions Matter: Improved Models for Distantly-Supervised Document-Level Question Answering

hao-cheng/ds_doc_qa ACL 2020

We address the problem of extractive question answering using document-level distant supervision, pairing questions and relevant documents with answer strings.

Rethinking the Objectives of Extractive Question Answering

KNOT-FIT-BUT/JointSpanExtraction EMNLP (MRQA) 2021

We propose multiple approaches to modelling the joint probability $P(a_s, a_e)$ of the answer's start and end positions directly.
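One way to see what joint modelling means: instead of normalizing start and end positions with two independent softmaxes, apply a single softmax over all valid (start, end) pairs. A minimal sketch with toy logits (an illustration of the idea, not the paper's implementation):

```python
import math

def joint_span_probs(start_logits, end_logits):
    """Softmax over all valid (s, e) pairs jointly, so the span
    probabilities sum to 1 over spans rather than over positions."""
    scores = {}
    for s, s_logit in enumerate(start_logits):
        for e in range(s, len(end_logits)):
            scores[(s, e)] = s_logit + end_logits[e]
    z = sum(math.exp(v) for v in scores.values())
    return {span: math.exp(v) / z for span, v in scores.items()}

probs = joint_span_probs([1.0, 0.5], [0.2, 1.5])
# Distribution over the valid spans (0, 0), (0, 1), (1, 1).
```

Because invalid spans (end before start) are excluded from the normalization, the model never assigns probability mass to malformed answers.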

Cooperative Self-training of Machine Reading Comprehension

luohongyin/RGX NAACL 2022

Pretrained language models have significantly improved the performance of downstream language understanding tasks, including extractive question answering, by providing high-quality contextualized word embeddings.

Sequence tagging for biomedical extractive question answering

dmis-lab/seqtagqa 15 Apr 2021

Following general domain EQA models, current biomedical EQA (BioEQA) models utilize the single-span extraction setting with post-processing steps.

How Optimal is Greedy Decoding for Extractive Question Answering?

ocastel/exact-extract 12 Aug 2021

However, this approach does not ensure that the answer is a span in the given passage, nor does it guarantee that it is the most probable one.

ReasonBERT: Pre-trained to Reason with Distant Supervision

sunlab-osu/reasonbert EMNLP 2021

We present ReasonBert, a pre-training method that augments language models with the ability to reason over long-range relations and multiple, possibly hybrid contexts.

A Few More Examples May Be Worth Billions of Parameters

yuvalkirstain/lm-evaluation-harness 8 Oct 2021

We investigate the dynamics of increasing the number of model parameters versus the number of labeled examples across a wide variety of tasks.

Semantic Search as Extractive Paraphrase Span Detection

turkunlp/paraphrase-span-detection 9 Dec 2021

In this paper, we approach the problem of semantic search by framing the search task as paraphrase span detection, i.e., given a segment of text as a query phrase, the task is to identify its paraphrase in a given document, the same modelling setup as typically used in extractive question answering.