TriviaQA
43 papers with code • 1 benchmark • 1 dataset
Most implemented papers
RECONSIDER: Re-Ranking using Span-Focused Cross-Attention for Open Domain Question Answering
State-of-the-art Machine Reading Comprehension (MRC) models for Open-domain Question Answering (QA) are typically trained for span selection using distantly supervised positive examples and heuristically retrieved negative examples.
Answering Ambiguous Questions through Generative Evidence Fusion and Round-Trip Prediction
When multiple plausible answers are found, the system should rewrite the question for each answer to resolve the ambiguity.
A Memory Efficient Baseline for Open Domain Question Answering
Recently, retrieval systems based on dense representations have led to important improvements in open-domain question answering, and related tasks.
Rider: Reader-Guided Passage Reranking for Open-Domain Question Answering
Current open-domain question answering systems often follow a Retriever-Reader architecture, where the retriever first retrieves relevant passages and the reader then reads the retrieved passages to form an answer.
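The Retriever-Reader architecture described above can be sketched in miniature. This is a toy illustration only: the retriever and reader below are word-overlap stand-ins for the dense retrievers and neural readers these papers actually use, and all function names and example passages are hypothetical.

```python
# Toy Retriever-Reader pipeline: retrieve() ranks passages, read() extracts
# an answer span from the top passage. Real systems use trained encoders.
import re


def _tokens(text):
    """Lowercased word tokens (stand-in for a learned encoder)."""
    return set(re.findall(r"\w+", text.lower()))


def retrieve(question, passages, k=2):
    """Rank passages by word overlap with the question (retriever stage)."""
    q = _tokens(question)
    return sorted(passages, key=lambda p: -len(q & _tokens(p)))[:k]


def read(question, passage):
    """Return the sentence with the highest question overlap (reader stage)."""
    q = _tokens(question)
    sentences = [s.strip() for s in passage.split(".") if s.strip()]
    return max(sentences, key=lambda s: len(q & _tokens(s)))


passages = [
    "Paris is the capital of France. It lies on the Seine.",
    "Berlin is the capital of Germany.",
]
top = retrieve("what is the capital of France", passages)
answer = read("what is the capital of France", top[0])
```

Rider's contribution sits between these two stages: the reader's predictions are fed back to rerank the retrieved passages before the final answer is formed.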
Efficient Passage Retrieval with Hashing for Open-domain Question Answering
Most state-of-the-art open-domain question answering systems use a neural retrieval model to encode passages into continuous vectors and extract them from a knowledge source.
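The hashing idea can be illustrated with a minimal sketch: continuous passage vectors are binarized so each dimension costs one bit instead of a 32-bit float, and candidates are compared by Hamming distance. The vectors and passage IDs below are made up for illustration; the paper's actual method learns the binary codes rather than thresholding at zero.

```python
# Memory-efficient retrieval sketch: sign-binarize dense vectors into bit
# patterns, then retrieve by Hamming distance over the compact index.

def binarize(vec):
    """Pack the signs of a dense vector into an integer bit pattern."""
    bits = 0
    for i, x in enumerate(vec):
        if x > 0:
            bits |= 1 << i
    return bits


def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return bin(a ^ b).count("1")


# Hypothetical 4-dimensional passage embeddings (real ones have hundreds).
passage_vecs = {
    "p1": [0.9, -0.3, 0.4, -0.1],
    "p2": [-0.8, 0.2, -0.5, 0.7],
}
index = {pid: binarize(v) for pid, v in passage_vecs.items()}  # built offline

query_vec = [0.7, -0.1, 0.2, -0.6]
q_code = binarize(query_vec)
best = min(index, key=lambda pid: hamming(q_code, index[pid]))
```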
Challenges in Generalization in Open Domain Question Answering
Recent work on Open Domain Question Answering has shown that there is a large discrepancy in model performance between novel test questions and those that largely overlap with training questions.
R2-D2: A Modular Baseline for Open-Domain Question Answering
This work presents a novel four-stage open-domain QA pipeline R2-D2 (Rank twice, reaD twice).
RoR: Read-over-Read for Long Document Machine Reading Comprehension
To address this problem, we propose RoR, a read-over-read method, which expands the reading field from chunk to document.
What's in a Name? Answer Equivalence For Open-Domain Question Answering
We incorporate answers for two settings: evaluation with additional answers and model training with equivalent answers.
Adversarial Retriever-Ranker for dense text retrieval
To address these challenges, we present Adversarial Retriever-Ranker (AR2), which consists of a dual-encoder retriever plus a cross-encoder ranker.
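The retrieve-then-rank structure behind AR2 can be sketched as follows. The fast dual encoder scores query and passage independently, so passage vectors can be precomputed and indexed; the slower cross encoder scores the (query, passage) pair jointly and re-ranks only the retrieved candidates. The encoders here are hypothetical bag-of-words stand-ins, not the trained transformers or the adversarial training scheme from the paper.

```python
# Two-stage scoring: a dual-encoder retriever followed by a cross-encoder
# ranker, with toy lexical encoders standing in for neural models.
import re
from collections import Counter


def encode(text):
    """Independent encoding: a bag-of-words vector (dual-encoder stand-in)."""
    return Counter(re.findall(r"\w+", text.lower()))


def dual_encoder_score(q_vec, p_vec):
    """Dot product of separate encodings -- passage side indexable offline."""
    return sum(q_vec[w] * p_vec[w] for w in q_vec)


def cross_encoder_score(query, passage):
    """Joint scoring of the pair: count contiguous bigram matches, which
    independently encoded vectors cannot see."""
    q = re.findall(r"\w+", query.lower())
    p = re.findall(r"\w+", passage.lower())
    q_bigrams = set(zip(q, q[1:]))
    return sum(1 for b in zip(p, p[1:]) if b in q_bigrams)


passages = ["the capital of France is Paris", "France exports capital goods"]
query = "capital of France"
q_vec = encode(query)
# Stage 1: retrieve candidates with the cheap dual encoder.
candidates = sorted(passages, key=lambda p: -dual_encoder_score(q_vec, encode(p)))[:2]
# Stage 2: re-rank candidates with the more accurate cross encoder.
best = max(candidates, key=lambda p: cross_encoder_score(query, p))
```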