Extractive Question-Answering

39 papers with code • 1 benchmark • 1 dataset

Extractive question answering is the task of answering a question by selecting a contiguous span of text from a given passage, rather than generating the answer from scratch.


Most implemented papers

LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

studio-ousia/luke EMNLP 2020

In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer.

MLQA: Evaluating Cross-lingual Extractive Question Answering

facebookresearch/MLQA ACL 2020

An alternative to building large monolingual training datasets is to develop cross-lingual systems which can transfer to a target language without requiring training data in that language.

MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain Question Answering

apple/ml-mkqa 30 Jul 2020

Progress in cross-lingual modeling depends on challenging, realistic, and diverse evaluation sets.

MarIA: Spanish Language Models

PlanTL-GOB-ES/lm-spanish 15 Jul 2021

This work presents MarIA, a family of Spanish language models and associated resources made available to the industry and the research community.

On the Multilingual Capabilities of Very Large-Scale English Language Models

temu-bsc/gpt3-queries LREC 2022

Generative Pre-trained Transformers (GPTs) have recently been scaled to unprecedented sizes in the history of machine learning.

Can Explanations Be Useful for Calibrating Black Box Models?

xiye17/interpcalib ACL 2022

Our approach first extracts a set of features combining human intuition about the task with model attributions generated by black box interpretation techniques, then uses a simple calibrator, in the form of a classifier, to predict whether the base model was correct or not.
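The pipeline described in the excerpt — hand-designed features combined with model attributions, fed to a simple classifier that predicts whether the base model answered correctly — can be sketched as below. The two features and the synthetic data are illustrative assumptions, not the paper's actual feature set; the calibrator is a plain logistic regression implemented in NumPy.

```python
import numpy as np

def train_calibrator(features, was_correct, lr=0.1, epochs=500):
    """Fit a simple logistic-regression calibrator.

    features:    (n, d) array of per-example features (e.g. base-model
                 confidence, attribution scores).
    was_correct: (n,) array of 0/1 labels - did the base QA model
                 answer this example correctly?
    Returns a weight vector of shape (d+1,), including a bias term.
    """
    X = np.hstack([features, np.ones((len(features), 1))])  # append bias
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # predicted P(correct)
        w -= lr * X.T @ (p - was_correct) / len(X)  # gradient step
    return w

def predict_correctness(w, features):
    """Calibrated probability that the base model is correct."""
    X = np.hstack([features, np.ones((len(features), 1))])
    return 1.0 / (1.0 + np.exp(-X @ w))

# Toy data with two hypothetical features per example:
# base-model confidence and attribution overlap with the predicted span.
rng = np.random.default_rng(0)
conf = rng.uniform(0, 1, size=200)
overlap = rng.uniform(0, 1, size=200)
X = np.stack([conf, overlap], axis=1)
# In this synthetic setup, correctness loosely follows the features.
y = ((conf + overlap) / 2 + rng.normal(0, 0.1, 200) > 0.5).astype(float)

w = train_calibrator(X, y)
probs = predict_correctness(w, X)
accuracy = ((probs > 0.5) == y).mean()
```

The calibrator never changes the base model's answers; it only attaches a trustworthiness score to each one, which is what makes the approach applicable to black-box models.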

COVID-19 event extraction from Twitter via extractive question answering with continuous prompts

viczong/extract_covid19_events_from_twitter 19 Mar 2023

As COVID-19 ravages the world, social media analytics could augment traditional surveys in assessing how the pandemic evolves and capturing consumer chatter that could help healthcare agencies in addressing it.

Learning Recurrent Span Representations for Extractive Question Answering

shimisalant/RaSoR 4 Nov 2016

In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed length representations of all spans in the evidence document with a recurrent network.
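The core idea — a fixed-length vector for every candidate span, built from the recurrent encoder's states at the span's endpoints, then scored to pick the answer — can be sketched as follows. The random "hidden states" stand in for an actual BiLSTM encoder, and the linear scorer is a stand-in for the paper's feed-forward scoring network.

```python
import numpy as np

def enumerate_span_representations(hidden, max_len=10):
    """Build a fixed-length representation for every span in a passage.

    hidden:  (T, d) array, one contextual vector per token, as produced
             by a recurrent encoder over the evidence document.
    max_len: longest span to consider, in tokens.
    Returns (spans, reps): spans is a list of (start, end) pairs
    (inclusive), and reps is an (n_spans, 2*d) array - each span is
    represented by concatenating its endpoint vectors, so every span
    has the same fixed length no matter how many tokens it covers.
    """
    T, d = hidden.shape
    spans, reps = [], []
    for start in range(T):
        for end in range(start, min(start + max_len, T)):
            spans.append((start, end))
            reps.append(np.concatenate([hidden[start], hidden[end]]))
    return spans, np.stack(reps)

def score_spans(reps, w):
    """Score each span with a linear layer (stand-in for an FFNN)."""
    return reps @ w

rng = np.random.default_rng(1)
T, d = 20, 8
hidden = rng.normal(size=(T, d))          # mock encoder output
spans, reps = enumerate_span_representations(hidden, max_len=5)
scores = score_spans(reps, rng.normal(size=2 * d))
best = spans[int(np.argmax(scores))]      # predicted answer span
```

Because every span maps to a vector of the same size, the extractor can score all O(T·max_len) candidates in one batched matrix multiply instead of handling variable-length spans case by case.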

Gendered Pronoun Resolution using BERT and an extractive question answering formulation

rakeshchada/corefqa WS 2019

In this paper, we propose an extractive question answering (QA) formulation of the pronoun resolution task that overcomes this limitation and shows much lower gender bias (0.99) on their dataset.

Look at the First Sentence: Position Bias in Question Answering

dmis-lab/position-bias EMNLP 2020

In this study, we hypothesize that when the distribution of the answer positions is highly skewed in the training set (e.g., answers lie only in the k-th sentence of each passage), QA models predicting answers as positions can learn spurious positional cues and fail to give answers in different positions.
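A quick way to see the skew the authors describe is to tabulate answer start positions over a training set. The toy dataset below is an illustrative assumption: in the biased case every answer falls near the start of its passage, mirroring the "answers lie only in the k-th sentence" failure mode.

```python
from collections import Counter

def position_histogram(examples, n_bins=4):
    """Histogram of answer start positions, bucketed into quarters
    of each passage.  examples: list of (passage_len, answer_start)."""
    counts = Counter()
    for passage_len, answer_start in examples:
        counts[min(n_bins - 1, answer_start * n_bins // passage_len)] += 1
    return [counts[b] for b in range(n_bins)]

# Biased training set: answers always near the passage start.
biased = [(100, s) for s in (3, 7, 12, 5, 9, 1)]
# More realistic set: answers spread across the passage.
uniform = [(100, s) for s in (3, 30, 55, 80, 95, 48)]

print(position_histogram(biased))   # [6, 0, 0, 0] - all mass up front
print(position_histogram(uniform))  # [1, 2, 1, 2] - spread out
```

A model trained on the biased set can reach high accuracy by exploiting position alone, which is exactly the spurious cue the paper warns about; checking this histogram before training is a cheap sanity test.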