Learning Deep Structure-Preserving Image-Text Embeddings

CVPR 2016  ·  Liwei Wang, Yin Li, Svetlana Lazebnik

This paper proposes a method for learning joint embeddings of images and text using a two-branch neural network with multiple layers of linear projections followed by nonlinearities. The network is trained with a large-margin objective that combines cross-view ranking constraints with within-view neighborhood structure preservation constraints inspired by the metric learning literature. Extensive experiments show that this approach yields significant improvements in accuracy for both image-to-text and text-to-image retrieval. The method achieves new state-of-the-art results on the Flickr30K and MSCOCO image-sentence datasets and shows promise on the new task of phrase localization on the Flickr30K Entities dataset.
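The cross-view ranking constraints described above require each image to be closer to its matching sentence than to any non-matching sentence (and vice versa) by a margin. A minimal numpy sketch of such a bi-directional large-margin ranking loss is shown below; the function name, the margin value, and summing over all negatives in the batch are illustrative assumptions, not the authors' exact formulation, and the within-view structure-preservation terms are omitted for brevity.

```python
import numpy as np

def cross_view_ranking_loss(x, y, margin=0.05):
    """Bi-directional large-margin ranking loss (illustrative sketch).

    x: (n, d) image embeddings; y: (n, d) sentence embeddings.
    Matching image-sentence pairs share the same row index.
    """
    # Normalize rows so inner products become cosine similarities.
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    sim = x @ y.T               # sim[i, j]: similarity of image i and sentence j
    pos = np.diag(sim)          # similarities of the matching pairs
    # Image-to-sentence direction: each non-matching sentence is a negative.
    cost_i2s = np.maximum(0.0, margin + sim - pos[:, None])
    # Sentence-to-image direction: each non-matching image is a negative.
    cost_s2i = np.maximum(0.0, margin + sim - pos[None, :])
    n = x.shape[0]
    mask = 1.0 - np.eye(n)      # exclude the positive pair itself
    return float(((cost_i2s + cost_s2i) * mask).sum() / n)
```

When the matched embeddings are already well separated (e.g. orthonormal pairs), every hinge term is inactive and the loss is zero; hard negatives close to a positive produce a positive penalty that training would push down.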

Results

Task: Image Retrieval · Dataset: Flickr30K 1K test · Model: SPE

Metric   Value   Global Rank
R@1      29.7    #15
R@5      60.1    #13
R@10     72.1    #14
