Broaden the Vision: Geo-Diverse Visual Commonsense Reasoning

Commonsense is often defined as knowledge shared by everyone. However, certain kinds of commonsense knowledge are tied to culture and geographic location and are therefore only shared locally. For example, wedding ceremonies look very different across regions because of customs shaped by historical and religious factors. Such regional characteristics, however, are generally overlooked in prior work. In this paper, we construct a Geo-Diverse Visual Commonsense Reasoning dataset (GD-VCR) to test vision-and-language models' ability to understand cultural and geo-location-specific commonsense. In particular, we study two state-of-the-art vision-and-language models, VisualBERT and ViLBERT, trained on VCR, a standard multimodal commonsense benchmark whose images come primarily from Western regions, and evaluate how well the trained models generalize to answering the questions in GD-VCR. We find that both models perform significantly worse on non-Western regions, including East Asia, South Asia, and Africa, than on Western regions. We analyze the reasons behind this performance disparity and find that the gap is larger on QA pairs that: 1) concern culture-related scenarios, e.g., weddings, religious activities, and festivals; 2) require high-order geo-diverse commonsense reasoning rather than low-order perception and recognition. Dataset and code are released at https://github.com/WadeYin9712/GD-VCR.

Published at EMNLP 2021.

Datasets

Introduced in the paper: GD-VCR
Used in the paper: VCR
Task: Visual Commonsense Reasoning   Benchmark dataset: GD-VCR

Model            Accuracy   Rank   Gap (West)   Rank
Human            88.84      #1     -            -
ViLBERT          59.99      #2     -7.28        #1
VisualBERT       53.95      #3     -10.42       #1
Text-only BERT   35.33      #4     -            -
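As a minimal sketch of how the two numbers above relate, the snippet below computes per-region accuracy and a "Gap (West)" value (average non-Western accuracy minus Western accuracy, so negative means the model is worse outside Western regions). The field names (region, prediction, answer_label) and region keys are illustrative assumptions, not the released GD-VCR schema or evaluation code.

    # Hypothetical sketch: per-region accuracy and the West gap from model predictions.
    # Field names and region labels are assumptions for illustration only.
    from collections import defaultdict

    def region_accuracies(examples):
        """examples: iterable of dicts with 'region', 'prediction', 'answer_label'."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for ex in examples:
            total[ex["region"]] += 1
            correct[ex["region"]] += int(ex["prediction"] == ex["answer_label"])
        return {region: correct[region] / total[region] for region in total}

    def gap_west(accs, west="West", non_west=("East Asia", "South Asia", "Africa")):
        """Average non-Western accuracy minus Western accuracy (negative = worse on non-West)."""
        non_west_avg = sum(accs[r] for r in non_west) / len(non_west)
        return non_west_avg - accs[west]

Under this reading, ViLBERT's -7.28 and VisualBERT's -10.42 indicate that both VCR-trained models lose several accuracy points when answering non-Western questions, which is the disparity the paper analyzes.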
