Visual Spatial Reasoning

30 Apr 2022 · Fangyu Liu, Guy Emerson, Nigel Collier

Spatial relations are a basic part of human cognition. However, they are expressed in natural language in a variety of ways, and previous work has suggested that current vision-and-language models (VLMs) struggle to capture relational information. In this paper, we present Visual Spatial Reasoning (VSR), a dataset containing more than 10k natural text-image pairs covering 66 types of spatial relations in English (such as under, in front of, and facing). While using a seemingly simple annotation format, we show that the dataset includes challenging linguistic phenomena, such as varying reference frames. We demonstrate a large gap between human and model performance: the human ceiling is above 95%, while state-of-the-art models only achieve around 70%. We observe that VLMs' by-relation performance has little correlation with the number of training examples, and that the tested models are in general incapable of recognising relations concerning the orientations of objects.
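
As a concrete illustration of the task: each VSR example pairs an image with a statement about a spatial relation (e.g. "The cat is under the table.") and a true/false label. The sketch below shows one plausible frozen-CLIP-style prediction, scoring the statement against a templated negation and choosing whichever CLIP finds more similar to the image. The negation template, the example statement, and the function name vsr_predict are illustrative assumptions, not the paper's exact baseline protocol.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a pretrained CLIP; the "frozen" baseline keeps these weights fixed.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def vsr_predict(image_path: str, caption: str) -> bool:
    """Return True if CLIP judges the statement more similar to the image
    than a templated negation of it (illustrative protocol only)."""
    image = Image.open(image_path).convert("RGB")
    # Hypothetical negation template; not taken from the paper.
    candidates = [caption, f"It is false that {caption.lower()}"]
    inputs = processor(text=candidates, images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (1, 2)
    return bool(logits.argmax(dim=-1).item() == 0)

# Example usage (hypothetical file name):
# vsr_predict("coco_000000123456.jpg", "The cat is under the table.")
```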


Datasets

VSR

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Visual Reasoning | VSR | LXMERT | accuracy | 70.1 | #1 |
| Visual Reasoning | VSR | ViLT | accuracy | 69.3 | #2 |
| Visual Reasoning | VSR | CLIP (finetuned) | accuracy | 65.1 | #3 |
| Visual Reasoning | VSR | Cobra | accuracy | 63.6 | #4 |
| Visual Reasoning | VSR | CLIP (frozen) | accuracy | 56.0 | #5 |
| Visual Reasoning | VSR | VisualBERT | accuracy | 55.2 | #6 |
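
The metric above is plain binary classification accuracy over held-out text-image pairs. Below is a minimal sketch of how such a score could be computed, assuming the test split is a JSONL file with image, caption, and 0/1 label fields; the field names are assumptions about the released data format (the authors' code and data are at github.com/cambridgeltl/visual-spatial-reasoning).

```python
import json
from typing import Callable

def vsr_accuracy(jsonl_path: str, predict: Callable[[str, str], bool]) -> float:
    """Compute binary accuracy over VSR examples.

    Assumes each JSON line has at least 'image', 'caption', and a 0/1 'label'
    field (an assumption about the data format, not a confirmed schema).
    """
    correct = total = 0
    with open(jsonl_path) as f:
        for line in f:
            ex = json.loads(line)
            pred = predict(ex["image"], ex["caption"])  # model returns True/False
            correct += int(pred == bool(ex["label"]))
            total += 1
    return correct / total

# Example usage with the vsr_predict sketch above (hypothetical file name):
# acc = vsr_accuracy("vsr_test.jsonl", vsr_predict)
```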

Methods


CLIP • LXMERT • ViLT • VisualBERT