ViSiL: Fine-grained Spatio-Temporal Video Similarity Learning

In this paper we introduce ViSiL, a Video Similarity Learning architecture that considers fine-grained spatio-temporal relations between pairs of videos -- relations that are typically lost in previous video retrieval approaches, which embed the whole frame or even the whole video into a vector descriptor before the similarity estimation. By contrast, our Convolutional Neural Network (CNN)-based approach is trained to calculate video-to-video similarity from refined frame-to-frame similarity matrices, so as to consider both intra- and inter-frame relations. In the proposed method, pairwise frame similarity is estimated by applying Tensor Dot (TD) followed by Chamfer Similarity (CS) on regional CNN frame features -- this avoids feature aggregation before the similarity calculation between frames. Subsequently, the similarity matrix between all video frames is fed to a four-layer CNN and then summarized using CS into a video-to-video similarity score -- this avoids feature aggregation before the similarity calculation between videos and captures the temporal similarity patterns between matching frame sequences. We train the proposed network using a triplet loss scheme and evaluate it on five public benchmark datasets covering four different video retrieval problems, where we demonstrate large improvements in comparison to the state of the art. The implementation of ViSiL is publicly available.
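The similarity computation described above (Tensor Dot over regional features, then Chamfer Similarity at the region level and again at the frame level) can be sketched in NumPy. This is a simplified illustration under stated assumptions -- it omits the four-layer similarity-refinement CNN and assumes L2-normalized regional features of shape `(frames, regions, dims)` -- not the authors' implementation.

```python
import numpy as np

def chamfer_similarity(sim):
    # Chamfer Similarity of a 2-D similarity matrix:
    # for each row, take the best match over columns, then average.
    return sim.max(axis=1).mean()

def frame_to_frame_similarity(a, b):
    # a: (Na, R, D) L2-normalized regional features of video A
    # b: (Nb, R, D) L2-normalized regional features of video B
    # Tensor Dot over the feature dimension -> (Na, R, Nb, R):
    # every region of every frame of A against every region of every frame of B.
    td = np.tensordot(a, b, axes=([2], [2]))
    # Chamfer Similarity over the two region axes -> (Na, Nb) frame similarity matrix.
    return td.max(axis=3).mean(axis=1)

def video_to_video_similarity(a, b):
    # In ViSiL the frame-level matrix is first refined by a small CNN;
    # here we apply Chamfer Similarity to it directly (simplified).
    return chamfer_similarity(frame_to_frame_similarity(a, b))

def l2_normalize(x):
    # Normalize each regional descriptor so dot products are cosine similarities.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)
```

With normalized features every entry lies in [-1, 1], and comparing a video with itself yields a similarity of exactly 1, since each frame's best match is its own identical frame.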

ICCV 2019

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Video Retrieval | FIVR-200K | ViSiLv (pt) | mAP (ISVR) | 0.723 | #5 |
| Video Retrieval | FIVR-200K | ViSiLv (pt) | mAP (DSVR) | 0.899 | #5 |
| Video Retrieval | FIVR-200K | ViSiLv (pt) | mAP (CSVR) | 0.854 | #4 |
| Video Retrieval | FIVR-200K | ViSiLsym | mAP (ISVR) | 0.654 | #10 |
| Video Retrieval | FIVR-200K | ViSiLsym | mAP (DSVR) | 0.833 | #10 |
| Video Retrieval | FIVR-200K | ViSiLsym | mAP (CSVR) | 0.792 | #9 |
| Video Retrieval | FIVR-200K | ViSiLf | mAP (ISVR) | 0.660 | #9 |
| Video Retrieval | FIVR-200K | ViSiLf | mAP (DSVR) | 0.843 | #9 |
| Video Retrieval | FIVR-200K | ViSiLf | mAP (CSVR) | 0.797 | #8 |
| Video Retrieval | FIVR-200K | ViSiLv (tf) | mAP (ISVR) | 0.702 | #7 |
| Video Retrieval | FIVR-200K | ViSiLv (tf) | mAP (DSVR) | 0.892 | #7 |
| Video Retrieval | FIVR-200K | ViSiLv (tf) | mAP (CSVR) | 0.841 | #5 |
