Multi-hop Question Answering

42 papers with code • 1 benchmark • 3 datasets

Multi-hop question answering is the task of answering questions that require gathering and combining evidence from multiple documents, sentences, or knowledge-graph facts, rather than from a single supporting passage.


Most implemented papers

Repurposing Entailment for Multi-Hop Question Answering Tasks

StonyBrookNLP/multee NAACL 2019

We introduce Multee, a general architecture that can effectively use entailment models for multi-hop QA tasks.
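The core idea can be sketched in a few lines: score each context sentence with an entailment model against the hypothesis (question plus candidate answer), then combine the per-sentence scores so that several partially supporting sentences can jointly support the hypothesis. The aggregation below is a hypothetical softmax-weighted sum, not Multee's actual multi-level architecture:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def aggregate_entailment(sentence_scores):
    """Combine per-sentence entailment scores into one hypothesis score.

    Softmax weights emphasize the most supportive sentences, so evidence
    spread across multiple sentences can still yield a high overall score.
    (Illustrative aggregation only; assumed, not the paper's exact method.)
    """
    weights = softmax(sentence_scores)
    return sum(w * s for w, s in zip(weights, sentence_scores))

# Two candidate answers: the hypothesis supported by several sentences
# scores higher than the one with uniformly weak support.
score_supported = aggregate_entailment([0.9, 0.8, 0.1])
score_unsupported = aggregate_entailment([0.2, 0.1, 0.3])
```

In practice the per-sentence scores would come from a pretrained entailment model rather than being hand-set.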

QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering

michiyasunaga/qagnn NAACL 2021

The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG.
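The joint-reasoning idea — connecting the QA context and KG entities in one graph and passing messages between them — can be illustrated with a toy mean-neighbor message-passing step. This is a bare sketch with hand-set 2-D features; QA-GNN itself uses learned, relation-aware GNN layers with LM-based relevance scoring:

```python
def gnn_step(features, edges):
    """One round of mean-neighbor message passing over an undirected graph.

    features: node id -> feature vector (list of floats)
    edges: list of (src, dst) node-id pairs
    """
    neighbors = {n: [] for n in features}
    for src, dst in edges:
        neighbors[src].append(dst)
        neighbors[dst].append(src)
    updated = {}
    for node, feat in features.items():
        msgs = [features[m] for m in neighbors[node]] or [feat]
        mean = [sum(vals) / len(msgs) for vals in zip(*msgs)]
        # blend the node's own features with its neighborhood mean
        updated[node] = [0.5 * a + 0.5 * b for a, b in zip(feat, mean)]
    return updated

# Joint graph: the QA-context node "q" is linked to the KG entities it
# mentions, so information flows between the question and the KG.
feats = {"q": [1.0, 0.0], "paris": [0.0, 1.0], "france": [0.0, 1.0]}
edges = [("q", "paris"), ("paris", "france")]
out = gnn_step(feats, edges)
```

After one step the question node has absorbed entity information (and vice versa), which is the mechanism that lets the two knowledge sources constrain each other.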

HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering

hotpotqa/hotpot EMNLP 2018

Existing question answering (QA) datasets fail to train QA systems to perform complex reasoning and provide explanations for answers.

Cognitive Graph for Multi-Hop Reading Comprehension at Scale

We propose a new CogQA framework for multi-hop question answering in web-scale documents.

Multi-hop Question Answering via Reasoning Chains

soujanyarbhat/aNswER_multirc 7 Oct 2019

Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way.
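The sequential-extraction property can be illustrated with a crude greedy chain builder: starting from the question's entities, pick one sentence per hop that overlaps with the entities gathered so far. This is a hypothetical stand-in for the paper's learned, context-aware extractor:

```python
def extract_chain(question_entities, sentences, max_hops=3):
    """Greedily build a reasoning chain by entity overlap.

    sentences: list of (sentence_id, set_of_entities) pairs.
    Returns the ids of selected sentences, in hop order.
    (Toy heuristic; the paper learns extraction rather than using overlap.)
    """
    known = set(question_entities)
    chain, remaining = [], list(sentences)
    for _ in range(max_hops):
        # pick the sentence sharing the most entities with what we know
        best = max(remaining, key=lambda s: len(s[1] & known), default=None)
        if best is None or not (best[1] & known):
            break
        chain.append(best[0])
        known |= best[1]  # newly seen entities become reachable next hop
        remaining.remove(best)
    return chain

sents = [
    ("s1", {"Alice", "Berlin"}),
    ("s2", {"Berlin", "Germany"}),
    ("s3", {"Mars", "rover"}),
]
# "Where does Alice live?": s1 links Alice to Berlin, s2 links Berlin onward.
chain = extract_chain({"Alice"}, sents)
```

Note that s2 only becomes selectable after s1 introduces "Berlin" — the sequential dependence between hops is exactly what the paper argues must be modeled.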

Commonsense for Generative Multi-Hop Question Answering Tasks

yicheng-w/CommonSenseMultiHopQA EMNLP 2018

We instead focus on a more challenging multi-hop generative task (NarrativeQA), which requires the model to reason, gather, and synthesize disjoint pieces of information within the context to generate an answer.

Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings

malllabiisc/EmbedKGQA ACL 2020

In a separate line of research, KG embedding methods have been proposed to reduce KG sparsity by performing missing link prediction.
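Missing-link prediction with KG embeddings can be sketched with a TransE-style score (EmbedKGQA itself uses ComplEx embeddings; the 2-D vectors below are hand-set for illustration, not learned):

```python
def transe_score(head, relation, tail):
    """TransE plausibility: lower ||h + r - t|| means a more plausible triple.

    This lets a model rank candidate answer entities even when the
    corresponding edge is missing from the (sparse) knowledge graph.
    """
    return sum((h + r - t) ** 2 for h, r, t in zip(head, relation, tail)) ** 0.5

entities = {"paris": [0.0, 0.0], "france": [1.0, 0.0], "tokyo": [0.0, 3.0]}
relation_capital_of = [1.0, 0.0]  # "capital_of" translates paris toward france

# Even if (paris, capital_of, france) were absent from the KG, the embedding
# score still ranks "france" above "tokyo" as the answer entity.
score_france = transe_score(entities["paris"], relation_capital_of, entities["france"])
score_tokyo = transe_score(entities["paris"], relation_capital_of, entities["tokyo"])
```

Real systems learn these vectors by minimizing the score of observed triples against corrupted ones; the ranking step shown here is what the QA model exploits at answer time.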

Analyzing the Effectiveness of the Underlying Reasoning Tasks in Multi-hop Question Answering

alab-nii/multi-hop-analysis 12 Feb 2023

To explain the predicted answers and evaluate the reasoning abilities of models, several studies have utilized underlying reasoning (UR) tasks in multi-hop question answering (QA) datasets.

Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA

jiangycTarheel/Adversarial-MultiHopQA ACL 2019

After adversarial training, the baseline's performance improves but is still limited on the adversarial evaluation.