What You See is What You Read? Improving Text-Image Alignment Evaluation
Automatically determining whether a text and a corresponding image are semantically aligned is a significant challenge for vision-language models, with applications in generative text-to-image and image-to-text tasks. In this work, we study methods for automatic text-image alignment evaluation. We first introduce SeeTRUE: a comprehensive evaluation set, spanning multiple datasets from both text-to-image and image-to-text generation tasks, with human judgements for whether a given text-image pair is semantically aligned. We then describe two automatic methods to determine alignment: the first involving a pipeline based on question generation and visual question answering models, and the second employing an end-to-end classification approach by finetuning multimodal pretrained models. Both methods surpass prior approaches in various text-image alignment tasks, with significant improvements in challenging cases that involve complex composition or unnatural images. Finally, we demonstrate how our approaches can localize specific misalignments between an image and a given text, and how they can be used to automatically re-rank candidates in text-to-image generation.
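As a rough sketch of the first (question generation + VQA) approach and of the candidate re-ranking use case, the snippet below turns caption words into yes/no questions, asks a VQA model each one, and averages the "yes" confidence. The question heuristic, the helper names (`generate_questions`, `alignment_score`, `rerank`), and the choice of the `dandelin/vilt-b32-finetuned-vqa` checkpoint are illustrative assumptions, not the paper's released VQ2 pipeline, which roughly speaking generates question–answer pairs from the text and checks the VQA model's answers against them.

```python
# Minimal sketch of a QG -> VQA alignment scorer (not the paper's VQ2 pipeline).
# The question "generator" is a toy rule over caption words; a real pipeline
# would use a learned question-generation model with expected answers.
from transformers import pipeline
from PIL import Image

# Real Hugging Face checkpoint, chosen here only for illustration.
vqa = pipeline("visual-question-answering", model="dandelin/vilt-b32-finetuned-vqa")

def generate_questions(caption: str) -> list[str]:
    # Toy stand-in for a question-generation model: ask a yes/no question
    # about each longer content word in the caption.
    words = [w.strip(".,").lower() for w in caption.split()]
    return [f"Is there a {w} in the picture?" for w in words if len(w) > 3]

def alignment_score(image: Image.Image, caption: str) -> float:
    """Average VQA confidence that each generated question is answered 'yes'."""
    questions = generate_questions(caption)
    if not questions:
        return 0.0
    scores = []
    for q in questions:
        preds = vqa(image=image, question=q, top_k=5)
        yes = next((p["score"] for p in preds if p["answer"].lower() == "yes"), 0.0)
        scores.append(yes)
    return sum(scores) / len(scores)

def rerank(candidates: list[Image.Image], prompt: str) -> list[Image.Image]:
    # Re-rank text-to-image candidates by their alignment with the prompt.
    return sorted(candidates, key=lambda img: alignment_score(img, prompt), reverse=True)
```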
Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Benchmark
---|---|---|---|---|---|---
Visual Reasoning | Winoground | COCA ViT-L14 (ft COCO) | Text Score | 28.25 | # 61 |
Visual Reasoning | Winoground | COCA ViT-L14 (ft COCO) | Image Score | 11.50 | # 70 |
Visual Reasoning | Winoground | COCA ViT-L14 (ft COCO) | Group Score | 8.25 | # 62 |
Visual Reasoning | Winoground | OFA large (ft SNLI-VE) | Text Score | 27.70 | # 65 |
Visual Reasoning | Winoground | OFA large (ft SNLI-VE) | Image Score | 14.30 | # 56 |
Visual Reasoning | Winoground | OFA large (ft SNLI-VE) | Group Score | 9.00 | # 59 |
Visual Reasoning | Winoground | CLIP RN50x64 | Text Score | 26.50 | # 67 |
Visual Reasoning | Winoground | CLIP RN50x64 | Image Score | 13.75 | # 61 |
Visual Reasoning | Winoground | CLIP RN50x64 | Group Score | 10.25 | # 52 |
Visual Reasoning | Winoground | TIFA | Text Score | 19.00 | # 86 |
Visual Reasoning | Winoground | TIFA | Image Score | 12.50 | # 67 |
Visual Reasoning | Winoground | TIFA | Group Score | 11.30 | # 47 |
Visual Reasoning | Winoground | BLIP2 (ft COCO) | Text Score | 44.00 | # 12 |
Visual Reasoning | Winoground | BLIP2 (ft COCO) | Image Score | 26.00 | # 16 |
Visual Reasoning | Winoground | BLIP2 (ft COCO) | Group Score | 23.50 | # 9 |
Visual Reasoning | Winoground | PaLI (ft SNLI-VE) | Text Score | 45.00 | # 10 |
Visual Reasoning | Winoground | PaLI (ft SNLI-VE) | Image Score | 41.50 | # 5 |
Visual Reasoning | Winoground | PaLI (ft SNLI-VE) | Group Score | 28.70 | # 7 |
Visual Reasoning | Winoground | PaLI (ft SNLI-VE + Synthetic Data) | Text Score | 46.50 | # 7 |
Visual Reasoning | Winoground | PaLI (ft SNLI-VE + Synthetic Data) | Image Score | 38.00 | # 6 |
Visual Reasoning | Winoground | PaLI (ft SNLI-VE + Synthetic Data) | Group Score | 28.75 | # 6 |
Visual Reasoning | Winoground | VQ2 | Text Score | 47.00 | # 5 |
Visual Reasoning | Winoground | VQ2 | Image Score | 42.20 | # 4 |
Visual Reasoning | Winoground | VQ2 | Group Score | 30.50 | # 5 |
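The Text, Image, and Group Scores in the table follow the standard Winoground definitions (Thrush et al., 2022): for each example with two (caption, image) pairs, the text score requires the matching caption to score higher for both images, the image score requires the matching image to score higher for both captions, and the group score requires both. A minimal sketch, assuming a generic `score(caption, image)` alignment function such as one of the methods above (`winoground_metrics` is a hypothetical helper name):

```python
# Sketch of the Winoground metric computation, given any alignment scorer.
from typing import Any, Callable, Sequence, Tuple

Example = Tuple[Any, Any, Any, Any]  # (caption_0, image_0, caption_1, image_1)

def winoground_metrics(examples: Sequence[Example],
                       score: Callable[[Any, Any], float]) -> dict:
    text_hits = image_hits = group_hits = 0
    for c0, i0, c1, i1 in examples:
        s00, s01 = score(c0, i0), score(c0, i1)
        s10, s11 = score(c1, i0), score(c1, i1)
        # Text score: for each image, the matching caption scores higher.
        text_ok = s00 > s10 and s11 > s01
        # Image score: for each caption, the matching image scores higher.
        image_ok = s00 > s01 and s11 > s10
        text_hits += text_ok
        image_hits += image_ok
        group_hits += text_ok and image_ok
    n = len(examples)
    return {
        "text_score": 100.0 * text_hits / n,
        "image_score": 100.0 * image_hits / n,
        "group_score": 100.0 * group_hits / n,
    }
```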