CLEVR-Ref+: Diagnosing Visual Reasoning with Referring Expressions

CVPR 2019  ·  Runtao Liu, Chenxi Liu, Yutong Bai, Alan Yuille

Referring object detection and referring image segmentation are important tasks that require joint understanding of visual information and natural language. Yet there is evidence that current benchmark datasets suffer from bias, and the intermediate reasoning processes of current state-of-the-art models cannot be easily evaluated. To address these issues and complement similar efforts in visual question answering, we build CLEVR-Ref+, a synthetic diagnostic dataset for referring expression comprehension. The precise locations and attributes of the objects are readily available, and the referring expressions are automatically associated with functional programs. The synthetic nature allows control over dataset bias (through the sampling strategy), and the modular programs enable intermediate reasoning ground truth without human annotators. In addition to evaluating several state-of-the-art models on CLEVR-Ref+, we also propose IEP-Ref, a module network approach that significantly outperforms other models on our dataset. In particular, we present two interesting and important findings using IEP-Ref: (1) the module trained to transform feature maps into segmentation masks can be attached to any intermediate module to reveal the entire reasoning process step by step; (2) even though every referring expression in the training data refers to at least one object, IEP-Ref correctly predicts no foreground when presented with false-premise referring expressions. To the best of our knowledge, this is the first direct and quantitative proof that neural modules behave in the way they are intended.
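To make the idea of executing a referring expression as a functional program concrete, here is a minimal sketch over a symbolic scene. The module names (`scene`, `filter_attr`) and the scene encoding are illustrative assumptions, not the paper's exact DSL; in IEP-Ref the modules operate on feature maps rather than symbolic sets, but the compositional execution and per-step inspection are analogous.

```python
# Toy scene: each object is a dict of attributes (hypothetical encoding).
SCENE = [
    {"shape": "cube",   "color": "red",  "size": "large"},
    {"shape": "sphere", "color": "blue", "size": "small"},
    {"shape": "cube",   "color": "blue", "size": "small"},
]

def scene(_):
    """Root module: start from the set of all object indices."""
    return set(range(len(SCENE)))

def filter_attr(attr, value):
    """Return a module that keeps only objects matching attr == value."""
    def module(objs):
        return {i for i in objs if SCENE[i][attr] == value}
    return module

# "the blue cubes" -> scene -> filter(shape=cube) -> filter(color=blue)
PROGRAM = [scene, filter_attr("shape", "cube"), filter_attr("color", "blue")]

def execute(program, trace=False):
    """Run the modules in sequence; optionally keep every intermediate
    result, analogous to attaching the segmentation head to each module."""
    state, steps = None, []
    for module in program:
        state = module(state)
        steps.append(state)
    return (state, steps) if trace else state

referred, steps = execute(PROGRAM, trace=True)
print(referred)  # -> {2}: only the small blue cube is referred
print(steps)     # per-step sets, the symbolic analogue of per-step masks
```

Note that a false-premise expression such as "the green cubes" executes to the empty set in this sketch, mirroring finding (2) above: a well-behaved executor predicts no foreground rather than hallucinating an object.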



Dataset introduced in the paper: CLEVR-Ref+

Datasets used in the paper: Visual Question Answering, CLEVR, RefCOCO
Task                                 Dataset      Model                  Metric    Value   Rank
Referring Expression Segmentation    CLEVR-Ref+   IEP-Ref (700K prog.)   IoU       80.6    #1
Referring Expression Comprehension   CLEVR-Ref+   MAttNet [34]           Accuracy  60.9    #4

