TriviaQA
44 papers with code • 1 benchmark • 1 dataset
Most implemented papers
Efficient and Robust Question Answering from Minimal Context over Documents
Neural models for question answering (QA) over documents have achieved significant performance improvements.
Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering
Extensive experiments on the large-scale SQuAD and TriviaQA datasets validate the effectiveness of the proposed method.
Episodic Memory Reader: Learning What to Remember for Question Answering from Streaming Data
We consider a novel question answering (QA) task where the machine needs to read from large streaming data (long documents or videos) without knowing when the questions will be given, which is difficult to solve with existing QA methods due to their lack of scalability.
Multi-step Retriever-Reader Interaction for Scalable Open-domain Question Answering
This paper introduces a new framework for open-domain question answering in which the retriever and the reader iteratively interact with each other.
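The multi-step idea can be sketched as a loop in which the reader's state feeds back into the next retrieval. A minimal toy version, assuming simple dot-product retrieval and a hand-rolled "reformulation" step standing in for the paper's neural reader:

```python
# Toy iterative retriever-reader loop; the scoring and reformulation
# functions here are illustrative stand-ins for learned neural components.

def retrieve(query_vec, doc_vecs, k=2):
    """Return indices of the k documents with the highest dot-product score."""
    scores = [(sum(q * d for q, d in zip(query_vec, doc)), i)
              for i, doc in enumerate(doc_vecs)]
    return [i for _, i in sorted(scores, reverse=True)[:k]]

def read_and_reformulate(query_vec, doc_vec):
    """Toy 'reader': blend the query vector with the top document's vector."""
    return [0.5 * q + 0.5 * d for q, d in zip(query_vec, doc_vec)]

def multi_step_qa(query_vec, doc_vecs, steps=3):
    """Alternate retrieval and reading for a fixed number of steps."""
    for _ in range(steps):
        top = retrieve(query_vec, doc_vecs)
        # The reader's output becomes the query for the next retrieval step.
        query_vec = read_and_reformulate(query_vec, doc_vecs[top[0]])
    return top
```

The point of the interaction is that evidence gathered by the reader can redirect retrieval toward documents the initial query alone would have missed.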
Retrieve, Read, Rerank: Towards End-to-End Multi-Document Reading Comprehension
This paper considers the reading comprehension task in which multiple documents are given as input.
A Discrete Hard EM Approach for Weakly Supervised Question Answering
Many question answering (QA) tasks only provide weak supervision for how the answer should be computed.
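The contrast at the heart of the hard-EM approach can be shown with two toy objectives: maximum marginal likelihood (MML) sums the model's probability over all weakly labeled answer candidates, while hard EM trains only on the single most likely candidate. A minimal sketch, with candidate probabilities given directly rather than produced by a model:

```python
import math

# Toy objectives over a set of candidate-answer probabilities.
# With weak supervision, several spans may match the answer string;
# MML marginalizes over them, hard EM commits to the best one.

def mml_loss(candidate_probs):
    """Negative log of the SUM of candidate likelihoods (marginalization)."""
    return -math.log(sum(candidate_probs))

def hard_em_loss(candidate_probs):
    """Negative log of the MAX candidate likelihood (hard EM's discrete pick)."""
    return -math.log(max(candidate_probs))
```

Because the max is never larger than the sum, the hard-EM loss upper-bounds the MML loss; the paper's argument is that committing to one candidate avoids spreading probability mass over spurious matches.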
Entities as Experts: Sparse Memory Access with Entity Supervision
We introduce a new model - Entities as Experts (EAE) - that can access distinct memories of the entities mentioned in a piece of text.
Probabilistic Assumptions Matter: Improved Models for Distantly-Supervised Document-Level Question Answering
We address the problem of extractive question answering using document-level distant supervision, pairing questions and relevant documents with answer strings.
Recurrent Chunking Mechanisms for Long-Text Machine Reading Comprehension
In this paper, we study machine reading comprehension (MRC) on long texts, where a model takes as inputs a lengthy document and a question and then extracts a text span from the document as an answer.
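The baseline this paper improves on is fixed-stride chunking: the long document is split into overlapping windows so that no answer span is cut away, and the model reads each window. A minimal sketch, with illustrative window and stride sizes (the paper's contribution is to let the model choose the next chunk position instead of using a fixed stride):

```python
# Fixed-stride chunking of a long token sequence into overlapping windows.
# Window/stride values are illustrative; overlap keeps answer spans intact.

def chunk(tokens, window=6, stride=3):
    """Split tokens into overlapping windows of size `window`, step `stride`."""
    chunks = []
    start = 0
    while True:
        chunks.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break  # last window reaches the end of the document
        start += stride
    return chunks
```

Each window is then paired with the question and scored independently; the recurrent mechanism in the paper replaces the fixed `stride` with a learned decision about where to read next.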
Generation-Augmented Retrieval for Open-domain Question Answering
We demonstrate that the generated contexts substantially enrich the semantics of the queries, and that GAR with sparse representations (BM25) achieves comparable or better performance than state-of-the-art dense retrieval methods such as DPR.
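The retrieval side of this recipe can be sketched concretely: expand the query with generated context, then score documents with BM25. In the sketch below the "generated" expansion is a hand-written string standing in for the paper's seq2seq generator, and the BM25 implementation is a standard textbook one rather than the paper's retrieval stack:

```python
import math
from collections import Counter

# Minimal BM25 scorer over pre-tokenized documents; k1 and b are the
# usual BM25 hyperparameters. The query expansion simulated in the test
# stands in for a learned generation model.

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Return one BM25 score per document for the given query terms."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    df = Counter(t for d in docs for t in set(d))  # document frequencies
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)
            s += idf * tf[t] * (k1 + 1) / (
                tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores
```

Appending generated terms that co-occur with the answer (e.g. an entity name) raises the score of answer-bearing documents without any dense encoder.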