IMRAM: Iterative Matching with Recurrent Attention Memory for Cross-Modal Image-Text Retrieval

Enabling bi-directional retrieval of images and texts is important for understanding the correspondence between vision and language. Existing methods leverage the attention mechanism to explore such correspondence in a fine-grained manner. However, most of them consider all semantics equally and thus align them uniformly, regardless of their diverse complexities. In fact, semantics are diverse (i.e., they involve different kinds of semantic concepts), and humans usually follow a latent structure to combine them into understandable language. Such sophisticated correspondences are difficult for existing methods to capture optimally. To address this deficiency, we propose the Iterative Matching with Recurrent Attention Memory (IMRAM) method, in which the correspondence between images and texts is captured through multiple steps of alignment. Specifically, we introduce an iterative matching scheme that explores such fine-grained correspondence progressively, with a memory distillation unit that refines alignment knowledge from early steps for later ones. Experimental results on three benchmark datasets, i.e., Flickr8K, Flickr30K, and MS COCO, show that IMRAM achieves state-of-the-art performance, demonstrating its effectiveness. Experiments on a practical business advertisement dataset, KWAI-AD, further validate the applicability of our method in practical scenarios.
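
To make the iterative matching idea concrete, the sketch below shows one plausible reading of it in PyTorch: at each step, query features (e.g. word features) attend over context features (e.g. image-region features), a matching score is accumulated, and a memory unit refines the queries for the next step. This is a minimal illustration, not the authors' implementation: the scaled dot-product attention, the gated `MemoryUnit` update, and the choice of three steps are all assumptions made here for clarity.

```python
# Minimal sketch of iterative matching with a recurrent memory (illustrative,
# not the paper's code). Queries repeatedly attend over the context; a
# hypothetical gated memory unit distills the attended summary back into the
# queries before the next alignment step.
import torch
import torch.nn.functional as F

def attend(query, context):
    """Cross-attention: each query vector gathers a weighted summary of the
    context vectors (e.g. text words attending over image regions)."""
    # query: (n_q, d), context: (n_c, d)
    scores = F.softmax(query @ context.t() / query.size(1) ** 0.5, dim=1)
    return scores @ context  # (n_q, d)

class MemoryUnit(torch.nn.Module):
    """Assumed gated update: blends each query with its attended summary,
    standing in for the paper's memory distillation unit."""
    def __init__(self, dim):
        super().__init__()
        self.gate = torch.nn.Linear(2 * dim, dim)

    def forward(self, query, attended):
        g = torch.sigmoid(self.gate(torch.cat([query, attended], dim=1)))
        return g * query + (1 - g) * attended

def iterative_match(query, context, memory, steps=3):
    """Run several alignment steps, accumulating the matching score so that
    both early coarse alignments and later refined ones contribute."""
    score = 0.0
    for _ in range(steps):
        attended = attend(query, context)
        # Cosine similarity between each query and its attended summary.
        score = score + F.cosine_similarity(query, attended, dim=1).mean()
        query = memory(query, attended)  # refine queries for the next step
    return score

# Example: 5 word vectors matched against 36 region vectors, 256-dim each.
q = torch.randn(5, 256)
c = torch.randn(36, 256)
print(iterative_match(q, c, MemoryUnit(256), steps=3))
```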

CVPR 2020

Results from the Paper


Task                   Dataset    Model  Metric              Value  Global Rank
Cross-Modal Retrieval  COCO 2014  IMRAM  Image-to-text R@1   53.7   #25
                                         Image-to-text R@5   83.2   #23
                                         Image-to-text R@10  91.0   #22
                                         Text-to-image R@1   39.7   #29
                                         Text-to-image R@5   69.1   #28
                                         Text-to-image R@10  79.8   #27
Cross-Modal Retrieval  Flickr30k  IMRAM  Image-to-text R@1   74.1   #18
                                         Image-to-text R@5   93.0   #17
                                         Image-to-text R@10  96.6   #17
                                         Text-to-image R@1   53.9   #19
                                         Text-to-image R@5   79.4   #18
                                         Text-to-image R@10  87.2   #18
