Beyond Question-Based Biases: Assessing Multimodal Shortcut Learning in Visual Question Answering

ICCV 2021 · Corentin Dancette, Remi Cadene, Damien Teney, Matthieu Cord

We introduce an evaluation methodology for visual question answering (VQA) to better diagnose cases of shortcut learning. These cases happen when a model exploits spurious statistical regularities to produce correct answers but does not actually deploy the desired behavior. There is a need to identify possible shortcuts in a dataset and assess their use before deploying a model in the real world. The research community in VQA has focused exclusively on question-based shortcuts, where a model might, for example, answer "What is the color of the sky" with "blue" by relying mostly on the question-conditional training prior while giving little weight to visual evidence. We go a step further and consider multimodal shortcuts that involve both questions and images. We first identify potential shortcuts in the popular VQA v2 training set by mining trivial predictive rules such as co-occurrences of words and visual elements. We then introduce VQA-CounterExamples (VQA-CE), an evaluation protocol based on our subset of CounterExamples, i.e., image-question-answer triplets where our rules lead to incorrect answers. We use this new evaluation in a large-scale study of existing approaches for VQA. We demonstrate that even state-of-the-art models perform poorly and that existing techniques to reduce biases are largely ineffective in this context. Our findings suggest that past work on question-based biases in VQA has only addressed one facet of a complex issue. The code for our method is available at https://github.com/cdancette/detect-shortcuts.
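As a rough illustration of the rule-mining idea described above, the sketch below mines simple (question word, detected object) → answer associations from training triplets and flags evaluation triplets where every matching rule predicts a wrong answer. The data layout, thresholds, and function names are assumptions made for this example; the authors' actual procedure is in the linked repository.

```python
# Illustrative sketch only: toy mining of shallow multimodal rules
# (question word + detected object -> answer) and counterexample selection.
# Field names, thresholds, and helpers below are assumptions for illustration;
# see https://github.com/cdancette/detect-shortcuts for the authors' code.
from collections import Counter, defaultdict

def mine_rules(train_examples, min_support=50, min_confidence=0.5):
    """Mine (question word, detected object) -> answer rules from training data.

    Each example is assumed to be a dict with keys:
      'question_words': list of str, 'objects': list of detector labels,
      'answer': str.
    """
    counts = defaultdict(Counter)  # (word, object) -> Counter over answers
    for ex in train_examples:
        for word in set(ex['question_words']):
            for obj in set(ex['objects']):
                counts[(word, obj)][ex['answer']] += 1

    rules = {}
    for key, answer_counts in counts.items():
        support = sum(answer_counts.values())
        answer, hits = answer_counts.most_common(1)[0]
        if support >= min_support and hits / support >= min_confidence:
            rules[key] = answer  # shortcut rule: if (word, object) co-occur, predict answer
    return rules

def find_counterexamples(eval_examples, rules):
    """Return evaluation examples where every matching rule gives a wrong answer."""
    counterexamples = []
    for ex in eval_examples:
        predictions = {rules[(word, obj)]
                       for word in set(ex['question_words'])
                       for obj in set(ex['objects'])
                       if (word, obj) in rules}
        if predictions and ex['answer'] not in predictions:
            counterexamples.append(ex)
    return counterexamples
```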


Datasets

VQA v2 · VQA-CE

Task: Visual Question Answering (VQA) · Dataset: VQA-CE · Metric: Accuracy (Counterexamples)

Rank  Model        Accuracy (Counterexamples)
#1    RandImg      34.41
#2    LMH + CSS    34.36
#3    LFF          34.27
#4    LMH          34.26
#5    UpDown       33.91
#6    ESR          33.26
#7    LMH + RMFE   33.14
#8    BLOCK        32.91
#9    RUBi         32.25
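For context, the Accuracy (Counterexamples) column is standard soft VQA accuracy computed only on the counterexample triplets. A minimal sketch, assuming ten human answers per question and the common min(#matches / 3, 1) simplification of the official metric:

```python
# Sketch of soft VQA accuracy restricted to the counterexamples split.
# Assumes each example carries its ten human-annotated answers; the exact
# data structures here are illustrative, not the benchmark's official code.
def vqa_soft_accuracy(predicted_answer, human_answers):
    """A prediction scores 1.0 if at least three annotators gave that answer."""
    matches = sum(ans == predicted_answer for ans in human_answers)
    return min(matches / 3.0, 1.0)

def accuracy_on_counterexamples(predictions, counterexamples):
    """Mean soft accuracy (in %) over the counterexample triplets of VQA-CE."""
    scores = [vqa_soft_accuracy(predictions[ex['question_id']], ex['human_answers'])
              for ex in counterexamples]
    return 100.0 * sum(scores) / len(scores)
```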

Methods


No methods listed for this paper.