42 papers with code • 1 benchmark • 1 dataset
We introduce an approach for open-domain question answering (QA) that retrieves and reads a passage graph, where vertices are passages of text and edges represent relationships that are derived from an external knowledge base or co-occurrence in the same article.
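A minimal sketch of what such a passage graph could look like, using plain Python dictionaries; the passages, the tiny knowledge base, and the co-occurrence rule below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative passage graph: vertices are passages, edges link passages that
# share a knowledge-base relation or appear in the same article.
from collections import defaultdict
from itertools import combinations

passages = {
    "p1": {"text": "Paris is the capital of France.", "article": "France"},
    "p2": {"text": "France borders Spain and Italy.", "article": "France"},
    "p3": {"text": "The Louvre is a museum in Paris.", "article": "Louvre"},
}

# Hypothetical external KB: passage pairs connected by a relation.
kb_relations = {("p1", "p3"): "located_in"}

graph = defaultdict(set)

# Edge type 1: co-occurrence in the same article.
for a, b in combinations(passages, 2):
    if passages[a]["article"] == passages[b]["article"]:
        graph[a].add(b)
        graph[b].add(a)

# Edge type 2: relations derived from the external knowledge base.
for (a, b), relation in kb_relations.items():
    graph[a].add(b)
    graph[b].add(a)

print(dict(graph))  # adjacency lists a retrieve-and-read model could traverse
```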
Generative models for open domain question answering have proven to be competitive, without resorting to external knowledge.
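A rough sketch of the closed-book generative setup, assuming the Hugging Face transformers API: the answer is generated from the question alone, with no retrieved evidence. The "t5-small" checkpoint is only a placeholder and is not tuned for trivia questions.

```python
# Illustrative closed-book generative QA: generate an answer directly from the
# question, without external knowledge or retrieved passages.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")  # placeholder checkpoint
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

question = "question: Who wrote the novel Dracula?"
inputs = tokenizer(question, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```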
We present TriviaQA, a challenging reading comprehension dataset containing over 650K question-answer-evidence triples.
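A small sketch of the question-answer-evidence triple structure the dataset provides; the field names and the example record are assumptions made for illustration, not TriviaQA's exact schema.

```python
# Illustrative representation of one TriviaQA-style triple.
from dataclasses import dataclass
from typing import List

@dataclass
class TriviaQAExample:
    question: str
    answer_aliases: List[str]   # accepted surface forms of the answer
    evidence_docs: List[str]    # distantly supervised evidence documents

example = TriviaQAExample(
    question="Which author wrote the novel Dracula?",
    answer_aliases=["Bram Stoker", "Abraham Stoker"],
    evidence_docs=["Dracula is an 1897 Gothic horror novel by Bram Stoker..."],
)
print(example.question, "->", example.answer_aliases[0])
```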
We also explore two approaches for end-to-end supervised training of the reader and retriever components in OpenQA models.
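One common way to train the two components end-to-end is to marginalise the reader's answer likelihood over the retriever's passage distribution; the toy PyTorch sketch below assumes made-up module sizes and encodings and is not the paper's training recipe.

```python
# Illustrative joint training: a single answer-level loss backpropagates into
# both the retriever (passage scores) and the reader (answer scores).
import torch
import torch.nn.functional as F

torch.manual_seed(0)
n_passages, hidden = 4, 8

retriever = torch.nn.Linear(hidden, 1)   # toy passage-relevance scorer
reader = torch.nn.Linear(hidden, 2)      # toy answer head (2 classes for brevity)
optimizer = torch.optim.Adam(
    list(retriever.parameters()) + list(reader.parameters()), lr=1e-3
)

passage_reprs = torch.randn(n_passages, hidden)  # stand-in passage encodings
gold = torch.tensor(1)                           # index of the correct answer class

# P(answer) = sum_p P(p | question) * P(answer | p, question)
retrieval_logp = F.log_softmax(retriever(passage_reprs).squeeze(-1), dim=0)
reader_logp = F.log_softmax(reader(passage_reprs), dim=-1)[:, gold]
loss = -torch.logsumexp(retrieval_logp + reader_logp, dim=0)

loss.backward()   # gradients flow into both retriever and reader
optimizer.step()
print(float(loss))
```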
We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input.
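A minimal sketch of one way to apply a paragraph-level model to a whole document: split the document into paragraphs, answer each independently, and keep the most confident prediction. `answer_paragraph` here is a hypothetical stand-in for any paragraph-level QA model, not the paper's method.

```python
# Illustrative document-level wrapper around a paragraph-level QA model.
from typing import List, Tuple

def answer_paragraph(question: str, paragraph: str) -> Tuple[str, float]:
    # Stand-in scorer: prefer paragraphs that share words with the question.
    overlap = len(set(question.lower().split()) & set(paragraph.lower().split()))
    return paragraph.split(".")[0], float(overlap)

def answer_document(question: str, document: str) -> str:
    paragraphs: List[str] = [p for p in document.split("\n\n") if p.strip()]
    candidates = [answer_paragraph(question, p) for p in paragraphs]
    best_answer, _ = max(candidates, key=lambda c: c[1])
    return best_answer

doc = "Bram Stoker wrote Dracula in 1897.\n\nThe novel is set partly in Transylvania."
print(answer_document("Who wrote Dracula?", doc))
```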
We propose two methods, namely, strength-based re-ranking and coverage-based re-ranking, to make use of the aggregated evidence from different passages to better determine the answer.
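A toy sketch of the two aggregation ideas: strength-based re-ranking sums a candidate answer's scores across the passages that predict it, while coverage-based re-ranking rewards candidates whose supporting passages jointly cover more of the question. The candidates, scores, and the way the two signals are combined below are illustrative assumptions.

```python
# Illustrative evidence aggregation over per-passage answer candidates.
from collections import defaultdict

question_words = {"who", "wrote", "dracula"}

# (answer candidate, score, passage text) produced by a base reader per passage.
candidates = [
    ("Bram Stoker", 0.6, "bram stoker wrote dracula"),
    ("Bram Stoker", 0.5, "dracula is a novel by bram stoker"),
    ("Mary Shelley", 0.7, "mary shelley wrote frankenstein"),
]

strength = defaultdict(float)
coverage_words = defaultdict(set)
for answer, score, passage in candidates:
    strength[answer] += score                                 # strength: summed evidence
    coverage_words[answer] |= set(passage.split()) & question_words

def rerank_score(answer: str) -> float:
    coverage = len(coverage_words[answer]) / len(question_words)
    return strength[answer] + coverage                         # toy combination of both signals

print(max(strength, key=rerank_score))  # -> "Bram Stoker"
```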
Neural network models recently proposed for question answering (QA) primarily focus on capturing the passage-question relation.