VQA-E: Explaining, Elaborating, and Enhancing Your Answers for Visual Questions

Most existing work in visual question answering (VQA) is dedicated to improving the accuracy of predicted answers, while disregarding the explanations. We argue that the explanation for an answer is as important as, or even more important than, the answer itself, since it makes the question-answering process more understandable and traceable. To this end, we propose a new task of VQA-E (VQA with Explanation), in which computational models are required to generate an explanation along with the predicted answer. We first construct a new dataset, and then frame the VQA-E problem in a multi-task learning architecture. Our VQA-E dataset is automatically derived from the VQA v2 dataset by intelligently exploiting the available captions. We have conducted a user study to validate the quality of explanations synthesized by our method. We quantitatively show that the additional supervision from explanations not only produces insightful textual sentences to justify the answers, but also improves the performance of answer prediction. Our model outperforms the state-of-the-art methods by a clear margin on the VQA v2 dataset.
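The multi-task framing described above can be sketched as a weighted sum of an answer-classification loss and an explanation-generation loss. The following is a minimal illustrative sketch, not the paper's implementation; the helper names and the weighting hyper-parameter `lam` are assumptions, and the paper's exact loss formulation may differ.

```python
import math

def softmax_nll(logits, target):
    """Negative log-likelihood of `target` under a softmax over `logits`."""
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return log_z - logits[target]

def vqa_e_loss(answer_logits, answer_idx, expl_token_logits, expl_tokens, lam=1.0):
    """Joint multi-task loss: L = L_answer + lam * L_explanation.

    L_answer is a cross-entropy over the answer vocabulary; L_explanation
    is the mean per-token NLL of the ground-truth explanation sentence.
    `lam` is a hypothetical weighting hyper-parameter (an assumption here).
    """
    l_answer = softmax_nll(answer_logits, answer_idx)
    l_expl = sum(
        softmax_nll(step_logits, tok)
        for step_logits, tok in zip(expl_token_logits, expl_tokens)
    ) / len(expl_tokens)
    return l_answer + lam * l_expl
```

Sharing the fused image-question features between the two heads lets the explanation supervision act as an auxiliary signal for answer prediction, which is the mechanism the abstract credits for the accuracy gain.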

ECCV 2018

Datasets


Task                                   Dataset   Model  Metric Name  Metric Value  Global Rank
Explanatory Visual Question Answering  GQA-REX   VQAE   BLEU-4       42.56         # 4
                                                        METEOR       34.51         # 4
                                                        ROUGE-L      73.59         # 4
                                                        CIDEr        358.20        # 4
                                                        SPICE        40.39         # 4
                                                        Grounding    31.29         # 5
                                                        GQA-val      65.19         # 4
                                                        GQA-test     57.24         # 4

Methods


No methods listed for this paper.