Faithful Multimodal Explanation for Visual Question Answering

WS 2019 · Jialin Wu, Raymond J. Mooney

AI systems' ability to explain their reasoning is critical to their utility and trustworthiness. Deep neural networks have enabled significant progress on many challenging problems such as visual question answering (VQA). However, most of them are opaque black boxes with limited explanatory capability. This paper presents a novel approach to developing a high-performing VQA system that can elucidate its answers with integrated textual and visual explanations that faithfully reflect important aspects of its underlying reasoning while capturing the style of comprehensible human explanations. Extensive experimental evaluation demonstrates the advantages of this approach compared to competing methods, using both automatic evaluation metrics and human evaluation.
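The paper itself specifies the full architecture; purely as a rough illustration of the general idea the abstract describes (one attention distribution over detected image regions serving both the answer classifier and a textual-explanation decoder, so the explanation is tied to the same visual evidence as the answer), here is a minimal PyTorch-style sketch. All names and dimensions here (FaithfulExplainer, vis_dim, the single-input LSTM decoder) are hypothetical and are not taken from the paper.

import torch
import torch.nn as nn

class FaithfulExplainer(nn.Module):
    """Hypothetical sketch: an attention-based VQA head whose attention
    weights also condition an explanation decoder, so the generated text
    and the highlighted regions reflect the evidence behind the answer."""

    def __init__(self, vis_dim=2048, q_dim=1024, hid=1024,
                 n_answers=3129, vocab=10000):
        super().__init__()
        self.att = nn.Linear(vis_dim + q_dim, 1)            # per-region score
        self.answer_head = nn.Linear(vis_dim + q_dim, n_answers)
        self.decoder = nn.LSTM(vis_dim + q_dim, hid, batch_first=True)
        self.word_out = nn.Linear(hid, vocab)

    def forward(self, regions, q_feat, expl_len=12):
        # regions: (B, R, vis_dim) object features; q_feat: (B, q_dim)
        q = q_feat.unsqueeze(1).expand(-1, regions.size(1), -1)
        alpha = torch.softmax(self.att(torch.cat([regions, q], -1)), dim=1)
        ctx = (alpha * regions).sum(1)                      # attended evidence
        answer_logits = self.answer_head(torch.cat([ctx, q_feat], -1))
        # Decode the explanation from the SAME attended context; for brevity
        # the decoder sees only that context at each step (a real model would
        # also feed back previous word embeddings).
        step = torch.cat([ctx, q_feat], -1).unsqueeze(1).repeat(1, expl_len, 1)
        h, _ = self.decoder(step)
        expl_logits = self.word_out(h)                      # (B, T, vocab)
        return answer_logits, expl_logits, alpha            # alpha = visual expl.

Reusing the answer module's attention for the explanation, rather than training a separate captioner, is one simple way to make the explanation reflect the model's actual reasoning instead of being a post-hoc rationalization.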

Task: Explanatory Visual Question Answering
Dataset: GQA-REX
Model: EXP

Metric      Value    Global Rank
BLEU-4       42.45   #5
METEOR       34.46   #5
ROUGE-L      73.51   #5
CIDEr       357.10   #5
SPICE        40.35   #5
Grounding    33.52   #4
GQA-val      65.17   #5
GQA-test     56.92   #5
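The BLEU-4, METEOR, ROUGE-L, CIDEr, and SPICE rows above are standard text-generation metrics that compare the model's generated explanations against human-written references. As a toy illustration of the kind of n-gram overlap BLEU-4 measures (this uses NLTK's sentence-level BLEU, not the benchmark's official scoring script, and the example sentences are invented):

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# One human reference explanation and one generated explanation (made up).
reference = ["the man is holding a racket on a tennis court".split()]
candidate = "the man holds a racket on a court".split()

# BLEU-4: geometric mean of 1- to 4-gram precisions, smoothed for short texts.
score = sentence_bleu(reference, candidate,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4: {100 * score:.2f}")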

Methods


No methods listed for this paper.