Learning Joint Embedding with Multimodal Cues for Cross-Modal Video-Text Retrieval

ICMR 2018 · Niluthpol Chowdhury Mithun, Juncheng Li, Florian Metze, Amit K. Roy-Chowdhury

Constructing a joint representation that is invariant across different modalities (e.g., video, language) is of significant importance in many multimedia applications. While there have been a number of recent successes in developing effective image-text retrieval methods by learning joint representations, the video-text retrieval task, in contrast, has not been explored to its fullest extent...


Results from the Paper


Task: Video Retrieval · Dataset: MSR-VTT · Model: JEMC

Direction       Metric        Value    Global Rank
text-to-video   R@1           7.0      # 4
text-to-video   R@5           20.9     # 2
text-to-video   R@10          29.7     # 4
text-to-video   Mean Rank     213.8    # 2
text-to-video   Median Rank   29.7     # 4
video-to-text   R@1           12.5     # 2
video-to-text   R@5           32.1     # 3
video-to-text   R@10          42.2     # 2
video-to-text   Median Rank   16       # 2
video-to-text   Mean Rank     134      # 2
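The retrieval metrics above (Recall@K, Median Rank, Mean Rank) are standard for cross-modal retrieval: given a similarity matrix between queries and candidates, the rank of each query's ground-truth match determines all of them. A minimal sketch of how they are typically computed (function name and the assumption that item i matches candidate i are illustrative, not from the paper):

```python
import numpy as np

def retrieval_metrics(sim):
    """Compute R@K, Median Rank, and Mean Rank from a similarity matrix.

    sim[i, j] is the similarity between query i and candidate j;
    the ground-truth match for query i is assumed to be candidate i.
    """
    n = sim.shape[0]
    # Sort candidates by descending similarity for each query.
    order = np.argsort(-sim, axis=1)
    # 1-indexed rank of the correct candidate for each query.
    ranks = np.array([int(np.where(order[i] == i)[0][0]) + 1 for i in range(n)])
    recall = lambda k: float(np.mean(ranks <= k) * 100)  # percentage of queries ranked in top k
    return {
        "R@1": recall(1),
        "R@5": recall(5),
        "R@10": recall(10),
        "MedR": float(np.median(ranks)),
        "MeanR": float(np.mean(ranks)),
    }
```

Text-to-video and video-to-text results come from running the same computation on the similarity matrix and its transpose, respectively.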
