TriviaQA
51 papers with code • 1 benchmark • 1 dataset
Most implemented papers
Longformer: The Long-Document Transformer
To address the quadratic cost of standard self-attention, we introduce the Longformer with an attention mechanism that scales linearly with sequence length, making it easy to process documents of thousands of tokens or longer.
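As a rough illustration of why a windowed attention pattern scales linearly, here is a hedged NumPy sketch of the general idea; it is not the paper's implementation, which also combines dilated and task-specific global attention.

    import numpy as np

    def sliding_window_attention(q, k, v, window=256):
        # q, k, v: (seq_len, dim). Each token attends only to a local window,
        # so the cost is O(seq_len * window) rather than O(seq_len^2).
        seq_len, dim = q.shape
        out = np.zeros_like(v)
        for i in range(seq_len):
            lo, hi = max(0, i - window), min(seq_len, i + window + 1)
            scores = q[i] @ k[lo:hi].T / np.sqrt(dim)
            weights = np.exp(scores - scores.max())
            weights /= weights.sum()
            out[i] = weights @ v[lo:hi]
        return out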
Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering
Generative models for open domain question answering have proven to be competitive, without resorting to external knowledge.
Knowledge Guided Text Retrieval and Reading for Open Domain Question Answering
We introduce an approach for open-domain question answering (QA) that retrieves and reads a passage graph, where vertices are passages of text and edges represent relationships that are derived from an external knowledge base or co-occurrence in the same article.
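A minimal sketch of the passage-graph structure described above, assuming hypothetical passage records that carry an article id and a set of linked entities (the field names are illustrative, not the paper's code):

    from collections import defaultdict
    from itertools import combinations

    def build_passage_graph(passages):
        # passages: list of dicts like {"id": ..., "article": ..., "entities": set(...)}
        graph = defaultdict(set)
        for a, b in combinations(passages, 2):
            same_article = a["article"] == b["article"]
            shared_entity = bool(a["entities"] & b["entities"])
            if same_article or shared_entity:
                graph[a["id"]].add(b["id"])
                graph[b["id"]].add(a["id"])
        return graph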
Relevance-guided Supervision for OpenQA with ColBERT
In much recent work, the retriever is a learned component that uses coarse-grained vector representations of questions and passages.
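ColBERT itself scores passages with token-level "late interaction" rather than a single coarse vector; a hedged sketch of that MaxSim scoring, assuming per-token embeddings have already been computed:

    import numpy as np

    def maxsim_score(query_token_embs, passage_token_embs):
        # query_token_embs: (q_len, dim), passage_token_embs: (p_len, dim)
        sims = query_token_embs @ passage_token_embs.T   # all token-pair similarities
        return sims.max(axis=1).sum()                    # best passage token per query token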
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples.
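One way to inspect these triples is through the Hugging Face `datasets` mirror of TriviaQA (the "rc" configuration is the reading-comprehension setting; the field names below reflect that mirror rather than the raw release):

    from datasets import load_dataset

    triviaqa = load_dataset("trivia_qa", "rc", split="validation")
    example = triviaqa[0]
    print(example["question"])
    print(example["answer"]["value"])         # canonical answer string
    print(example["answer"]["aliases"][:5])   # accepted answer aliases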
Scaling Language Models: Methods, Analysis & Insights from Training Gopher
Language modelling provides a step towards intelligent communication systems by harnessing large repositories of written human knowledge to better predict and understand the world.
End-to-End Training of Neural Retrievers for Open-Domain Question Answering
We also explore two approaches for end-to-end supervised training of the reader and retriever components in OpenQA models.
Simple and Effective Multi-Paragraph Reading Comprehension
We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input.
Evidence Aggregation for Answer Re-Ranking in Open-Domain Question Answering
We propose two methods, namely, strength-based re-ranking and coverage-based re-ranking, to make use of the aggregated evidence from different passages to better determine the answer.
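A hedged sketch of the strength-based idea: candidate answers extracted from many passages (or with high aggregate reader score) are promoted, whereas coverage-based re-ranking would instead weigh how fully each candidate's supporting passages cover the question. The function and input format here are illustrative only.

    from collections import defaultdict

    def strength_rerank(candidates):
        # candidates: list of (answer_string, reader_score) pairs drawn from
        # different retrieved passages for the same question.
        totals = defaultdict(float)
        for answer, score in candidates:
            totals[answer.lower().strip()] += score   # aggregate evidence across passages
        return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)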
A Question-Focused Multi-Factor Attention Network for Question Answering
Neural network models recently proposed for question answering (QA) primarily focus on capturing the passage-question relation.