Self-Critical Reasoning for Robust Visual Question Answering

NeurIPS 2019 · Jialin Wu, Raymond J. Mooney

Deep-learning Visual Question Answering (VQA) systems tend to capture superficial statistical correlations in the training data because of strong language priors, and they fail to generalize to test data with a significantly different question-answer (QA) distribution. To address this issue, we introduce a self-critical training objective that ensures that visual explanations of correct answers match the most influential image regions more than other competitive answer candidates do. The influential regions are either determined from human visual or textual explanations, or identified automatically from just the significant words in the question and answer. We evaluate our approach on the VQA generalization task using the VQA-CP dataset, achieving a new state of the art: 49.5% using textual explanations and 48.5% using automatically annotated regions.
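The objective amounts to a gradient-based criterion: measure how sensitive each answer's predicted probability is to each image region's features, and penalize the model whenever a competing (incorrect) answer is more sensitive than the ground-truth answer to the region marked as most influential by the human or automatic annotation. The sketch below illustrates such a hinge-style penalty in PyTorch under simplifying assumptions; the function names (`region_sensitivity`, `self_critical_loss`), the GradCAM-style sensitivity score, the margin, and the toy pooling head in the demo are illustrative choices, not the authors' released implementation.

```python
# Minimal sketch of a self-critical penalty of this kind in PyTorch.
# All names (region_sensitivity, self_critical_loss, the linear head in
# the demo) and the margin value are illustrative assumptions, not the
# authors' released code.
import torch
import torch.nn.functional as F


def region_sensitivity(answer_prob, region_feats):
    """GradCAM-style sensitivity of one answer's probability to each region.

    Returns a (num_regions,) tensor: the gradient of the answer probability
    w.r.t. each region's features, dotted with those features.
    """
    grads = torch.autograd.grad(answer_prob, region_feats,
                                retain_graph=True, create_graph=True)[0]
    return (grads * region_feats).sum(dim=-1)


def self_critical_loss(answer_logits, region_feats, gt_idx,
                       influential_mask, margin=0.0):
    """Hinge penalty: the strongest rival answer should not be more
    sensitive than the ground-truth answer to the most influential region."""
    probs = F.softmax(answer_logits, dim=-1)

    # Strongest competing (incorrect) answer, selected without tracking grads.
    masked = probs.detach().clone()
    masked[gt_idx] = -1.0
    rival_idx = masked.argmax()

    gt_sens = region_sensitivity(probs[gt_idx], region_feats)
    rival_sens = region_sensitivity(probs[rival_idx], region_feats)

    # Most influential region according to the (human or automatic) annotation.
    top_region = influential_mask.argmax()
    return F.relu(margin + rival_sens[top_region] - gt_sens[top_region])


if __name__ == "__main__":
    torch.manual_seed(0)
    num_regions, feat_dim, num_answers = 36, 64, 10
    region_feats = torch.randn(num_regions, feat_dim, requires_grad=True)
    head = torch.nn.Linear(feat_dim, num_answers)    # stand-in VQA answer head
    answer_logits = head(region_feats).mean(dim=0)   # pool regions into logits
    influential_mask = torch.rand(num_regions)       # stand-in influence scores
    loss = self_critical_loss(answer_logits, region_feats, gt_idx=3,
                              influential_mask=influential_mask)
    print(loss.item())
```

In the paper, a penalty of this kind is added on top of the base VQA training loss, with the influential regions supplied by human visual or textual explanations (as in the UpDn+SCR (VQA-X) entry below) or proposed automatically from significant QA words.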

Result

Task:        Visual Question Answering (VQA)
Dataset:     VQA-CP
Model:       UpDn+SCR (VQA-X)
Metric:      Score = 49.45
Global Rank: #6
