25 papers with code • 1 benchmark • 1 dataset


# LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention

In this paper, we propose new pretrained contextualized representations of words and entities based on the bidirectional transformer.

# MLQA: Evaluating Cross-lingual Extractive Question Answering

An alternative to building large monolingual training datasets is to develop cross-lingual systems which can transfer to a target language without requiring training data in that language.

# MKQA: A Linguistically Diverse Benchmark for Multilingual Open Domain Question Answering

30 Jul 2020

Progress in cross-lingual modeling depends on challenging, realistic, and diverse evaluation sets.

# MarIA: Spanish Language Models

15 Jul 2021

This work presents MarIA, a family of Spanish language models and associated resources made available to the industry and the research community.

# On the Multilingual Capabilities of Very Large-Scale English Language Models

Generative Pre-trained Transformers (GPTs) have recently been scaled to unprecedented sizes in the history of machine learning.

# Learning Recurrent Span Representations for Extractive Question Answering

4 Nov 2016

In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed length representations of all spans in the evidence document with a recurrent network.
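As a rough illustration of the idea (not the paper's exact architecture), a fixed-length representation for every span up to a length cap can be built from an encoder's per-token hidden states, e.g. by concatenating the hidden states at the span's endpoints; the function name, the endpoint-concatenation choice, and the length cap are illustrative assumptions:

```python
import numpy as np

def span_representations(hidden, max_len):
    """Build a fixed-length vector for every span (i, j) with j - i < max_len
    by concatenating the hidden states at the span's two endpoints.

    hidden: (n_tokens, d) array of per-token hidden states from some encoder.
    Returns the list of (i, j) spans and a (n_spans, 2 * d) matrix of vectors.
    """
    n, _ = hidden.shape
    spans, reps = [], []
    for i in range(n):
        for j in range(i, min(n, i + max_len)):
            spans.append((i, j))
            reps.append(np.concatenate([hidden[i], hidden[j]]))
    return spans, np.stack(reps)
```

Because every span maps to a vector of the same dimensionality, a single scoring layer can then rank all candidate answer spans at once.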

# Gendered Pronoun Resolution using BERT and an extractive question answering formulation

In this paper, we propose an extractive question answering (QA) formulation of the pronoun resolution task that overcomes this limitation and shows much lower gender bias (0.99) on their dataset.

# Look at the First Sentence: Position Bias in Question Answering

In this study, we hypothesize that when the distribution of answer positions is highly skewed in the training set (e.g., answers lie only in the k-th sentence of each passage), QA models that predict answers as positions can learn spurious positional cues and fail to give answers at different positions.

# Probabilistic Assumptions Matter: Improved Models for Distantly-Supervised Document-Level Question Answering

We address the problem of extractive question answering using document-level distant supervision, pairing questions and relevant documents with answer strings.

# Rethinking the Objectives of Extractive Question Answering

We propose multiple approaches to modelling the joint probability $P(a_s, a_e)$ of the answer span's start and end positions directly.
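To make the contrast concrete, here is a minimal NumPy sketch (an illustrative assumption, not the paper's implementation) of the common factorized objective $P(a_s)\,P(a_e)$ versus a single softmax over all valid (start, end) pairs:

```python
import numpy as np

def independent_span_probs(start_logits, end_logits):
    """Factorized model: P(a_s, a_e) = P(a_s) * P(a_e),
    with independent softmaxes over start and end positions."""
    p_s = np.exp(start_logits) / np.exp(start_logits).sum()
    p_e = np.exp(end_logits) / np.exp(end_logits).sum()
    return np.outer(p_s, p_e)

def joint_span_probs(pair_scores):
    """Joint model: one softmax over all valid (start, end) pairs.

    pair_scores: (n, n) matrix of scores for each candidate span.
    Invalid spans (end < start) get probability zero.
    """
    n = pair_scores.shape[0]
    mask = np.triu(np.ones((n, n), dtype=bool))   # keep only end >= start
    scores = np.where(mask, pair_scores, -np.inf)
    exp = np.exp(scores - scores[mask].max())     # stabilized softmax
    return exp / exp[mask].sum()
```

The factorized version can assign high probability to an incoherent pair of positions, whereas the joint softmax normalizes over whole spans, which is the distinction the paper's objectives target.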
