Deep Fragment Embeddings for Bidirectional Image Sentence Mapping

NeurIPS 2014 · Andrej Karpathy, Armand Joulin, Li Fei-Fei

We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.
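To make the idea in the abstract concrete, here is a minimal sketch (not the authors' released code) of fragment-level embedding and the two objectives: image fragments and sentence fragments are projected into a common space with linear layers, fragment pairs are scored by inner products, and a fragment-level hinge loss is combined with a global image-sentence ranking loss. The dimensions, the ReLU nonlinearity, the specific hinge formulations, and how pairwise fragment scores are aggregated into an image-sentence score are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class FragmentEmbedder(nn.Module):
    """Projects image fragments and sentence fragments into a shared space
    and returns pairwise fragment alignment scores (illustrative dimensions)."""

    def __init__(self, img_dim=4096, sent_dim=1200, embed_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)    # image fragments -> common space
        self.sent_proj = nn.Linear(sent_dim, embed_dim)  # sentence fragments -> common space

    def forward(self, img_frags, sent_frags):
        # img_frags:  (num_img_fragments, img_dim), e.g. detected-object features
        # sent_frags: (num_sent_fragments, sent_dim), e.g. dependency-relation features
        v = torch.relu(self.img_proj(img_frags))
        s = torch.relu(self.sent_proj(sent_frags))
        # Pairwise inner-product alignment scores:
        # shape (num_img_fragments, num_sent_fragments)
        return v @ s.t()


def fragment_alignment_loss(scores, is_match, margin=1.0):
    """Fragment alignment objective (hedged sketch): matching fragment pairs
    should score above +margin, non-matching pairs below -margin."""
    y = is_match.float() * 2.0 - 1.0                 # map {False, True} -> {-1, +1}
    return torch.clamp(margin - y * scores, min=0.0).mean()


def global_ranking_loss(sim, margin=1.0):
    """Global ranking objective: sim[i, j] is the aggregated fragment score
    between sentence i and image j; diagonal entries are the true pairs."""
    pos = sim.diag().view(-1, 1)                                 # scores of true pairs
    cost_s = torch.clamp(margin + sim - pos, min=0.0)            # rank images per sentence
    cost_i = torch.clamp(margin + sim - pos.t(), min=0.0)        # rank sentences per image
    mask = 1.0 - torch.eye(sim.size(0), device=sim.device)       # ignore the true pairs
    return ((cost_s + cost_i) * mask).mean()
```

In the paper, image fragments come from object detections and sentence fragments from typed dependency tree relations; the sketch above leaves feature extraction and the aggregation of fragment scores into the global similarity matrix unspecified.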

Benchmark result

Task: Referring Expression Comprehension
Dataset: Talk2Car
Model: OSM
Metric: AP50 = 35.31
Global Rank: #13

Methods

No methods listed for this paper.