A Joint Sequence Fusion Model for Video Question Answering and Retrieval

ECCV 2018  ·  Youngjae Yu, Jongseok Kim, Gunhee Kim

We present an approach named JSFusion (Joint Sequence Fusion) that can measure the semantic similarity between any pair of multimodal sequences (e.g. a video clip and a language sentence). Our multimodal matching network consists of two key components. First, the Joint Semantic Tensor composes a dense pairwise representation of the two sequences into a 3D tensor. Then, the Convolutional Hierarchical Decoder computes their similarity score by discovering hidden hierarchical matches between the two sequence modalities. Both modules leverage hierarchical attention mechanisms that learn to promote well-matched representation patterns while pruning out misaligned ones in a bottom-up manner. Although JSFusion is a universal model applicable to any multimodal sequence data, this work focuses on video-language tasks, including multimodal retrieval and video QA. We evaluate the JSFusion model on three retrieval and VQA tasks in LSMDC, for which our model achieves the best performance reported so far. We also perform multiple-choice and movie retrieval tasks on the MSR-VTT dataset, on which our approach outperforms many state-of-the-art methods.
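The two components described above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the Joint Semantic Tensor is approximated here by a simple Hadamard (elementwise) product over every (frame, word) pair, and the Convolutional Hierarchical Decoder is replaced by a max-then-mean reduction over the pairwise grid; the learned convolutions and hierarchical attention gates are omitted. All function names are illustrative.

```python
import numpy as np

def joint_semantic_tensor(video_feats, word_feats):
    """Compose a dense pairwise representation of two sequences.

    video_feats: (T, d) array of frame features.
    word_feats:  (N, d) array of word embeddings.
    Returns a 3D tensor of shape (T, N, d) where entry (t, n) is a
    joint feature for frame t and word n. Here the composition is an
    elementwise product; the paper learns it with attention.
    """
    return video_feats[:, None, :] * word_feats[None, :, :]

def similarity_score(tensor):
    """Collapse the joint tensor into a scalar similarity score.

    A crude stand-in for the Convolutional Hierarchical Decoder:
    per-pair match strengths (mean over the feature axis) are reduced
    bottom-up by taking the best-matching word for each frame (max)
    and averaging over frames (mean), so well-matched pairs are
    promoted and misaligned ones are discarded.
    """
    grid = tensor.mean(axis=-1)       # (T, N) pairwise match strengths
    return grid.max(axis=1).mean()    # best word per frame, averaged

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.standard_normal((8, 16))   # 8 frames, 16-dim features
    caption = video[::2].copy()            # a well-aligned "sentence"
    distractor = rng.standard_normal((4, 16))
    s_match = similarity_score(joint_semantic_tensor(video, caption))
    s_other = similarity_score(joint_semantic_tensor(video, distractor))
    print(s_match > s_other)               # matched pair scores higher
```

In the retrieval setting, such a score would be computed for every candidate video-sentence pair and the candidates ranked by it; the paper trains the real model end-to-end with a ranking loss.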



Results from the Paper

| Task            | Dataset     | Model    | Metric                    | Value | Global Rank |
|-----------------|-------------|----------|---------------------------|-------|-------------|
| Video Retrieval | LSMDC       | JSFusion | text-to-video R@1         | 9.1   | #22         |
| Video Retrieval | LSMDC       | JSFusion | text-to-video R@5         | 21.2  | #19         |
| Video Retrieval | LSMDC       | JSFusion | text-to-video R@10        | 34.1  | #16         |
| Video Retrieval | LSMDC       | JSFusion | text-to-video Median Rank | 36    | #14         |
| Video Retrieval | MSR-VTT     | JSFusion | text-to-video R@1         | 10.2  | #19         |
| Video Retrieval | MSR-VTT     | JSFusion | text-to-video R@10        | 43.2  | #16         |
| Video Retrieval | MSR-VTT     | JSFusion | text-to-video Median Rank | 13    | #12         |
| Video Retrieval | MSR-VTT     | JSFusion | video-to-text R@5         | 31.2  | #10         |
| Video Retrieval | MSR-VTT-1kA | JSFusion | text-to-video R@1         | 10.2  | #39         |
| Video Retrieval | MSR-VTT-1kA | JSFusion | text-to-video R@5         | 31.2  | #37         |
| Video Retrieval | MSR-VTT-1kA | JSFusion | text-to-video R@10        | 43.2  | #38         |
| Video Retrieval | MSR-VTT-1kA | JSFusion | text-to-video Median Rank | 13    | #29         |

