Multi-hop Question Answering
31 papers with code • 1 benchmark • 2 datasets
The problem of answering questions using knowledge from pre-trained language models (LMs) and knowledge graphs (KGs) presents two challenges: given a QA context (question and answer choice), methods need to (i) identify relevant knowledge from large KGs, and (ii) perform joint reasoning over the QA context and KG.
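The two steps above can be sketched with a toy example: step (i) retrieves the subgraph of a KG that touches entities mentioned in the QA context, and step (ii) scores each answer choice jointly against the question and the retrieved subgraph. All names, the toy KG, and the counting-based score below are illustrative assumptions, not any specific paper's method.

```python
# Hypothetical toy KG as a set of (head, relation, tail) triples.
TOY_KG = {
    ("paris", "capital_of", "france"),
    ("france", "located_in", "europe"),
    ("berlin", "capital_of", "germany"),
}

def retrieve_subgraph(kg, mentioned_entities):
    """Step (i): keep only triples that touch an entity in the QA context."""
    return {t for t in kg if t[0] in mentioned_entities or t[2] in mentioned_entities}

def score_choice(subgraph, question_entities, choice):
    """Step (ii): a crude stand-in for joint reasoning -- count triples
    linking the answer choice to entities in the question."""
    return sum(
        1 for h, _, t in subgraph
        if (h == choice and t in question_entities)
        or (t == choice and h in question_entities)
    )

question_entities = {"france"}          # entities mentioned in the question
choices = ["paris", "berlin"]           # candidate answer choices
sub = retrieve_subgraph(TOY_KG, question_entities | set(choices))
best = max(choices, key=lambda c: score_choice(sub, question_entities, c))
print(best)  # -> paris
```

In practice, step (ii) is done with a learned model (e.g. a graph neural network over the retrieved subgraph) rather than triple counting; the sketch only shows the pipeline shape.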
We instead focus on a more challenging multi-hop generative task (NarrativeQA), which requires the model to gather, synthesize, and reason over disjoint pieces of information within the context to generate an answer.
Existing question answering (QA) datasets fail to train QA systems to perform complex reasoning and provide explanations for answers.
In a separate line of research, KG embedding methods have been proposed to reduce KG sparsity by performing missing link prediction.
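Missing link prediction with KG embeddings can be sketched in the TransE style, where each entity and relation gets a vector and a triple (h, r, t) is scored by how close h + r lands to t; a missing tail is predicted by ranking candidates. The embeddings below are hand-picked toy values, not trained, and the example is a minimal illustration rather than any paper's implementation.

```python
import math

# Hand-picked 2-D toy embeddings (illustrative, not trained).
EMB = {
    "paris":   [0.0, 1.0],
    "france":  [1.0, 1.0],
    "berlin":  [0.0, 0.0],
    "germany": [1.0, 0.0],
    "capital_of": [1.0, 0.0],  # relation as a translation vector
}

def transe_score(h, r, t):
    """Higher is better: negative Euclidean distance of h + r from t."""
    diff = [EMB[h][i] + EMB[r][i] - EMB[t][i] for i in range(2)]
    return -math.sqrt(sum(d * d for d in diff))

def predict_tail(h, r, candidates):
    """Rank candidate tails for the incomplete triple (h, r, ?)."""
    return max(candidates, key=lambda t: transe_score(h, r, t))

print(predict_tail("berlin", "capital_of", ["france", "germany"]))  # -> germany
```

Trained versions learn these vectors by pushing observed triples to score higher than corrupted ones; the predicted links can then densify a sparse KG before the reasoning step.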
Avoiding Reasoning Shortcuts: Adversarial Evaluation, Training, and Model Development for Multi-Hop QA
After adversarial training, the baseline's performance improves but is still limited on the adversarial evaluation.
We propose jointly training a model to simultaneously fill this knowledge gap and compose the inferred knowledge with the provided partial knowledge.