Multi-hop Question Answering
42 papers with code • 1 benchmark • 3 datasets
These leaderboards are used to track progress in Multi-hop Question Answering.
Libraries
Use these libraries to find Multi-hop Question Answering models and implementations.
Most implemented papers
Repurposing Entailment for Multi-Hop Question Answering Tasks
We introduce Multee, a general architecture that can effectively use entailment models for multi-hop QA tasks.
QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering
The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG.
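The first challenge, retrieving a small relevant subgraph from a large KG given the entities mentioned in the QA context, can be sketched as a k-hop neighborhood search. This is a generic illustration, not QA-GNN's actual retrieval procedure; the triples, entity names, and hop limit below are invented for the example.

```python
from collections import defaultdict, deque

# Toy KG as a list of (head, relation, tail) triples — illustrative only.
edges = [
    ("paris", "capital_of", "france"),
    ("france", "located_in", "europe"),
    ("seine", "flows_through", "paris"),
    ("berlin", "capital_of", "germany"),
]

def khop_subgraph(edges, seeds, k=2):
    """Collect every triple reachable within k hops of the seed entities,
    treating edges as undirected for reachability."""
    adj = defaultdict(list)
    for h, r, t in edges:
        adj[h].append((h, r, t))
        adj[t].append((h, r, t))
    frontier = deque((s, 0) for s in seeds)
    seen, sub = set(seeds), set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for h, r, t in adj[node]:
            sub.add((h, r, t))
            for nxt in (h, t):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
    return sub

# Entities mentioned in the QA context seed the retrieval.
print(sorted(khop_subgraph(edges, {"paris"}, k=1)))
```

Joint reasoning over the QA context and the retrieved subgraph (the second challenge) is then done by the model itself; this sketch only covers the retrieval side.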
HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering
Existing question answering (QA) datasets fail to train QA systems to perform complex reasoning and provide explanations for answers.
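HotpotQA addresses this by annotating each question with the supporting sentences a system must cite. A minimal sketch of working with its JSON layout, where `context` is a list of `[title, sentences]` paragraphs and each supporting fact is a `[title, sentence_index]` pair (the question and sentences below are made up for illustration):

```python
# One example in the HotpotQA layout (fields abridged, content invented).
example = {
    "question": "Which country is the river that flows through Paris in?",
    "answer": "France",
    "context": [
        ["Seine", ["The Seine is a river in France.",
                   "It flows through Paris."]],
        ["Paris", ["Paris is the capital of France."]],
    ],
    # Each supporting fact points at a paragraph title and sentence index.
    "supporting_facts": [["Seine", 0], ["Paris", 0]],
}

def gold_sentences(ex):
    """Resolve supporting-fact pointers to the actual sentence strings."""
    paras = {title: sents for title, sents in ex["context"]}
    return [paras[title][idx] for title, idx in ex["supporting_facts"]]

print(gold_sentences(example))
```

Evaluating both answer accuracy and supporting-fact recovery is what makes the benchmark test explainable multi-hop reasoning rather than answer matching alone.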
Cognitive Graph for Multi-Hop Reading Comprehension at Scale
We propose a new CogQA framework for multi-hop question answering in web-scale documents.
Multi-hop Question Answering via Reasoning Chains
Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way.
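The sequential, context-aware extraction idea can be illustrated with a greedy toy chain builder: each step scores the remaining candidate sentences against the entities accumulated along the chain so far, not just the question. This is a simplified sketch of the general idea, not the paper's model; the sentences and entity sets are invented.

```python
# Candidate sentences with (pre-extracted) entity mentions — illustrative only.
sentences = {
    "s1": {"text": "The Seine flows through Paris.",
           "entities": {"Seine", "Paris"}},
    "s2": {"text": "Paris is the capital of France.",
           "entities": {"Paris", "France"}},
    "s3": {"text": "Berlin is in Germany.",
           "entities": {"Berlin", "Germany"}},
}

def extract_chain(sentences, start_entities, max_len=3):
    """Greedily grow a chain, requiring each new sentence to share an
    entity with the entities covered by the chain so far."""
    chain, covered, used = [], set(start_entities), set()
    for _ in range(max_len):
        # Context-aware step: score candidates against the whole chain's
        # entity set, which grows as the chain is extended.
        best = max(
            (sid for sid in sentences if sid not in used),
            key=lambda sid: len(sentences[sid]["entities"] & covered),
            default=None,
        )
        if best is None or not (sentences[best]["entities"] & covered):
            break
        chain.append(best)
        used.add(best)
        covered |= sentences[best]["entities"]
    return chain

# Starting from the question entity "Seine", the chain hops via "Paris".
print(extract_chain(sentences, {"Seine"}))
```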
Commonsense for Generative Multi-Hop Question Answering Tasks
We instead focus on a more challenging multi-hop generative task (NarrativeQA), which requires the model to reason, gather, and synthesize disjoint pieces of information within the context to generate an answer.
HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data
The dataset is paired with a hybrid model that combines heterogeneous information to find the answer.
Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings
In a separate line of research, KG embedding methods have been proposed to reduce KG sparsity by performing missing link prediction.
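Missing link prediction with KG embeddings can be sketched with a TransE-style scoring function, which rates a triple (h, r, t) by how close the head embedding plus the relation embedding lands on the tail embedding. The embeddings below are random placeholders (real systems learn them), so the resulting ranking here is meaningless; this only shows the scoring mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embeddings — random here purely for illustration; in practice these
# are trained so that true triples score highest.
entities = {e: rng.normal(size=4) for e in ["paris", "france", "germany"]}
relations = {"capital_of": rng.normal(size=4)}

def transe_score(h, r, t):
    """TransE score: -||h + r - t||; higher (less negative) = more plausible."""
    return -np.linalg.norm(entities[h] + relations[r] - entities[t])

# Missing-link prediction: rank candidate tails for (paris, capital_of, ?).
candidates = ["france", "germany"]
ranked = sorted(candidates,
                key=lambda t: transe_score("paris", "capital_of", t),
                reverse=True)
print(ranked)
```

Filling in such missing links densifies the KG, which in turn gives multi-hop QA systems more paths to reason over.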
Analyzing the Effectiveness of the Underlying Reasoning Tasks in Multi-hop Question Answering
To explain the predicted answers and evaluate the reasoning abilities of models, several studies have utilized underlying reasoning (UR) tasks in multi-hop question answering (QA) datasets.
Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA
After adversarial training, the baseline's performance improves but is still limited on the adversarial evaluation.