Extractive Question-Answering

25 papers with code • 1 benchmark • 1 dataset

Extractive question answering is the task of answering a question by selecting the span of text within a given passage (or document) that contains the answer, rather than generating the answer freely.
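At inference time, an extractive reader typically scores candidate answer spans from per-token start and end logits. A minimal sketch of that decoding step, with hypothetical logits standing in for a real reader model's output:

```python
# Minimal sketch of extractive-QA span decoding. The logits below are
# hypothetical; a real system obtains them from a reader model.

def best_span(start_logits, end_logits, max_len=15):
    """Return (start, end) maximising start_logits[s] + end_logits[e]
    over valid spans with s <= e and length <= max_len."""
    best, best_score = (0, 0), float("-inf")
    for s, s_score in enumerate(start_logits):
        for e in range(s, min(s + max_len, len(end_logits))):
            score = s_score + end_logits[e]
            if score > best_score:
                best, best_score = (s, e), score
    return best

# Toy example: token 3 is the likely start, token 5 the likely end.
start_logits = [0.1, 0.2, 0.0, 2.5, 0.3, 0.1]
end_logits   = [0.0, 0.1, 0.2, 0.4, 0.3, 2.1]
print(best_span(start_logits, end_logits))  # (3, 5)
```

The validity constraints (start before end, bounded length) are what distinguish span decoding from two independent argmaxes over the logits.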

Most implemented papers

LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

studio-ousia/luke EMNLP 2020

In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer.

MLQA: Evaluating Cross-lingual Extractive Question Answering

facebookresearch/MLQA ACL 2020

An alternative to building large monolingual training datasets is to develop cross-lingual systems which can transfer to a target language without requiring training data in that language.

MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain Question Answering

apple/ml-mkqa 30 Jul 2020

Progress in cross-lingual modeling depends on challenging, realistic, and diverse evaluation sets.

MarIA: Spanish Language Models

PlanTL-GOB-ES/lm-spanish 15 Jul 2021

This work presents MarIA, a family of Spanish language models and associated resources made available to the industry and the research community.

On the Multilingual Capabilities of Very Large-Scale English Language Models

temu-bsc/gpt3-queries LREC 2022

Generative Pre-trained Transformers (GPTs) have recently been scaled to unprecedented sizes in the history of machine learning.

Learning Recurrent Span Representations for Extractive Question Answering

shimisalant/RaSoR 4 Nov 2016

In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed-length representations of all spans in the evidence document with a recurrent network.
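The key idea is that every span, whatever its width, gets a vector of the same size, so all spans can be scored with one shared scorer. A sketch (not the paper's exact model) using endpoint concatenation over contextual token vectors; names and shapes are illustrative:

```python
import numpy as np

# Illustrative sketch: fixed-length span representations built by
# concatenating a span's endpoint token vectors, then scored with a
# shared linear scorer. Random vectors stand in for BiLSTM outputs.

rng = np.random.default_rng(0)
T, H = 6, 4                       # number of tokens, hidden size
tokens = rng.normal(size=(T, H))  # stand-in contextual token vectors

def span_reps(tokens, max_len=3):
    """Enumerate all spans up to max_len tokens; each span maps to a
    fixed-length vector [h_start ; h_end] regardless of its width."""
    spans, reps = [], []
    for s in range(len(tokens)):
        for e in range(s, min(s + max_len, len(tokens))):
            spans.append((s, e))
            reps.append(np.concatenate([tokens[s], tokens[e]]))
    return spans, np.stack(reps)

spans, reps = span_reps(tokens)
w = rng.normal(size=reps.shape[1])  # shared linear scorer (illustrative)
scores = reps @ w
print(spans[int(scores.argmax())])  # highest-scoring candidate span
```

Because every representation has the same dimensionality, a single softmax over all candidate spans can be trained end to end.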

Gendered Pronoun Resolution using BERT and an extractive question answering formulation

rakeshchada/corefqa WS 2019

In this paper, we propose an extractive question answering (QA) formulation of pronoun resolution task that overcomes this limitation and shows much lower gender bias (0.99) on their dataset.

Look at the First Sentence: Position Bias in Question Answering

dmis-lab/position-bias EMNLP 2020

In this study, we hypothesize that when the distribution of the answer positions is highly skewed in the training set (e.g., answers lie only in the k-th sentence of each passage), QA models predicting answers as positions can learn spurious positional cues and fail to give answers in different positions.

Probabilistic Assumptions Matter: Improved Models for Distantly-Supervised Document-Level Question Answering

hao-cheng/ds_doc_qa ACL 2020

We address the problem of extractive question answering using document-level distant supervision, pairing questions and relevant documents with answer strings.

Rethinking the Objectives of Extractive Question Answering

KNOT-FIT-BUT/JointSpanExtraction EMNLP (MRQA) 2021

Therefore, we propose multiple approaches to modelling the joint probability $P(a_s, a_e)$ directly.
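The contrast is with the standard objective, which factorises the span probability as two independent softmaxes over start and end positions; a joint model normalises once over all valid (start, end) pairs. A sketch with illustrative scores:

```python
import numpy as np

# Sketch of joint vs. independent span objectives. Instead of separate
# softmaxes P(a_s) and P(a_e), the joint model applies one softmax over
# all valid (start, end) pairs with s <= e. Scores are illustrative.

def independent_probs(start_logits, end_logits):
    """P(s)P(e): can place probability mass on invalid pairs (s > e)."""
    s = np.exp(start_logits) / np.exp(start_logits).sum()
    e = np.exp(end_logits) / np.exp(end_logits).sum()
    return np.outer(s, e)

def joint_probs(pair_scores):
    """One softmax over valid pairs only (upper triangle, s <= e)."""
    mask = np.triu(np.ones_like(pair_scores, dtype=bool))
    z = np.where(mask, np.exp(pair_scores), 0.0)
    return z / z.sum()

scores = np.array([[2.0, 1.0, 0.0],
                   [5.0, 3.0, 4.0],   # scores[1, 0] is an invalid pair
                   [0.0, 1.0, 2.0]])
P = joint_probs(scores)
print(P[np.tril_indices(3, -1)].sum())  # invalid spans carry zero mass
```

Normalising jointly guarantees that no probability mass is wasted on spans whose end precedes their start, which the independent factorisation cannot rule out.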