Multi-hop Question Answering

31 papers with code • 1 benchmark • 2 datasets

Multi-hop question answering requires a model to gather and combine evidence from multiple documents, sentences, or knowledge-graph facts to answer a question, rather than extracting the answer from a single passage.

Most implemented papers

Repurposing Entailment for Multi-Hop Question Answering Tasks

StonyBrookNLP/multee NAACL 2019

We introduce Multee, a general architecture that can effectively use entailment models for multi-hop QA tasks.

QA-GNN: Reasoning with Language Models and Knowledge Graphs for Question Answering

michiyasunaga/qagnn NAACL 2021

The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG.
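A common first step in this line of work is challenge (i): pulling a small, question-relevant subgraph out of a large KG before any joint reasoning. This is a minimal sketch of that retrieval step as a k-hop breadth-first expansion from the entities mentioned in the QA context; the toy triples and entity names are made up for illustration and this is not the QA-GNN pipeline itself.

```python
from collections import deque

def k_hop_subgraph(edges, seeds, k):
    """Collect entities within k hops of the seed entities and
    the triples among them (toy stand-in for KG retrieval)."""
    # Build an undirected adjacency map over the triples.
    adj = {}
    for h, r, t in edges:
        adj.setdefault(h, []).append(t)
        adj.setdefault(t, []).append(h)

    visited = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue  # do not expand past k hops
        for nbr in adj.get(node, []):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append((nbr, depth + 1))

    # Keep only triples whose endpoints both survived.
    sub = [(h, r, t) for h, r, t in edges if h in visited and t in visited]
    return visited, sub

# Hypothetical mini-KG: two hops link "paris" to "europe".
edges = [
    ("paris", "capital_of", "france"),
    ("france", "in", "europe"),
    ("tokyo", "capital_of", "japan"),
]
nodes, sub = k_hop_subgraph(edges, {"paris"}, 2)
```

The retrieved subgraph (here, the Paris/France/Europe component) is what a graph neural network would then reason over jointly with the question encoding.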

Cognitive Graph for Multi-Hop Reading Comprehension at Scale

THUDM/CogQA ACL 2019

We propose a new CogQA framework for multi-hop question answering in web-scale documents.

Multi-hop Question Answering via Reasoning Chains

soujanyarbhat/aNswER_multirc 7 Oct 2019

Our analysis shows the properties of chains that are crucial for high performance: in particular, modeling extraction sequentially is important, as is dealing with each candidate sentence in a context-aware way.
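The two properties highlighted above, sequential extraction and context-aware scoring of each candidate sentence, can be illustrated with a minimal greedy sketch: each hop scores remaining sentences by word overlap with the question *plus* the chain built so far, so later hops can follow bridge entities introduced by earlier hops. This is a toy heuristic for illustration, not the paper's learned extractor.

```python
def extract_chain(question, sentences, max_hops=2):
    """Greedily build a reasoning chain: each hop picks the sentence
    overlapping most with the question plus the chain so far."""
    context = set(question.lower().split())
    chain = []
    remaining = list(sentences)
    for _ in range(max_hops):
        if not remaining:
            break
        # Context-aware score: overlap with question AND prior hops.
        best = max(remaining,
                   key=lambda s: len(context & set(s.lower().split())))
        chain.append(best)
        remaining.remove(best)
        context |= set(best.lower().split())  # bridge words carry forward
    return chain

# Hypothetical 2-hop question: the first hop introduces "shakespeare",
# which lets the second hop find the birthplace sentence.
q = "where was the author of hamlet born"
sents = [
    "hamlet was written by shakespeare",
    "shakespeare was born in stratford",
    "the moon orbits the earth",
]
chain = extract_chain(q, sents)
```

The distractor sentence never enters the chain because it shares almost no vocabulary with the question or the selected evidence.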

Commonsense for Generative Multi-Hop Question Answering Tasks

yicheng-w/CommonSenseMultiHopQA EMNLP 2018

We instead focus on a more challenging multi-hop generative task (NarrativeQA), which requires the model to reason, gather, and synthesize disjoint pieces of information within the context to generate an answer.

HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering

hotpotqa/hotpot EMNLP 2018

Existing question answering (QA) datasets fail to train QA systems to perform complex reasoning and provide explanations for answers.

Improving Multi-hop Question Answering over Knowledge Graphs using Knowledge Base Embeddings

malllabiisc/EmbedKGQA ACL 2020

In a separate line of research, KG embedding methods have been proposed to reduce KG sparsity by performing missing link prediction.
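The link-prediction idea referenced here can be sketched with a TransE-style score, one of the standard KG embedding objectives: a triple (h, r, t) is plausible when the head embedding translated by the relation embedding lands near the tail. The 2-d vectors below are made up for illustration; EmbedKGQA itself uses ComplEx embeddings, so this is a simplified stand-in.

```python
import math

def transe_score(h, r, t):
    """TransE plausibility: negative L2 distance of h + r from t.
    Higher (less negative) means a more plausible triple."""
    return -math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Toy 2-d entity embeddings (hypothetical values).
emb = {
    "paris":  [0.0, 0.0],
    "france": [1.0, 0.0],
    "tokyo":  [0.0, 1.0],
    "japan":  [1.0, 1.0],
}
capital_of = [1.0, 0.0]  # toy relation vector

# Predict the missing tail for (paris, capital_of, ?):
candidates = ["france", "japan", "tokyo"]
best = max(candidates,
           key=lambda c: transe_score(emb["paris"], capital_of, emb[c]))
```

Because the score is defined for unseen triples, the model can answer over links absent from the KG, which is exactly the sparsity-reduction property the paper exploits for multi-hop QA.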

Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA

jiangycTarheel/Adversarial-MultiHopQA ACL 2019

After adversarial training, the baseline's performance improves but is still limited on the adversarial evaluation.

What's Missing: A Knowledge Gap Guided Approach for Multi-hop Question Answering

allenai/missing-fact IJCNLP 2019

We propose jointly training a model to simultaneously fill this knowledge gap and compose it with the provided partial knowledge.