SEA: Sentence Encoder Assembly for Video Retrieval by Textual Queries

24 Nov 2020  ·  Xirong Li, Fangming Zhou, Chaoxi Xu, Jiaqi Ji, Gang Yang

Retrieving unlabeled videos by textual queries, known as Ad-hoc Video Search (AVS), is a core theme in multimedia data management and retrieval. The success of AVS depends on cross-modal representation learning that encodes both query sentences and videos into common spaces for semantic similarity computation. Inspired by the initial success of a few previous works in combining multiple sentence encoders, this paper takes a step forward by developing a new and general method for effectively exploiting diverse sentence encoders. The novelty of the proposed method, which we term Sentence Encoder Assembly (SEA), is two-fold. First, unlike prior art that uses only a single common space, SEA supports text-video matching in multiple encoder-specific common spaces. This property prevents the matching from being dominated by a specific encoder whose encoding vector is much longer than those of the other encoders. Second, to exploit complementarities among the individual common spaces, we propose multi-space multi-loss learning. As extensive experiments on four benchmarks (MSR-VTT, TRECVID AVS 2016-2019, TGIF and MSVD) show, SEA surpasses the state-of-the-art. In addition, SEA is extremely easy to implement. All this makes SEA an appealing solution for AVS and promising for continuously advancing the task by harvesting new sentence encoders.
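To make the two ideas concrete, here is a minimal PyTorch sketch of encoder-specific common spaces and multi-space multi-loss training. The encoder dimensions, the linear projections, the equal-weight similarity fusion, and the hard-negative triplet loss are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SEA(nn.Module):
    """Sketch of Sentence Encoder Assembly: each sentence encoder gets
    its own common space shared with the video branch, and matching is
    performed per space rather than in one concatenated space."""

    def __init__(self, text_dims, video_dim, space_dim=1536):
        super().__init__()
        # One text projection per sentence encoder output
        # (e.g. BoW, word2vec, GRU, BERT features of differing sizes).
        self.text_proj = nn.ModuleList(
            [nn.Linear(d, space_dim) for d in text_dims])
        # One video projection per encoder-specific common space.
        self.video_proj = nn.ModuleList(
            [nn.Linear(video_dim, space_dim) for _ in text_dims])

    def forward(self, text_feats, video_feat):
        # text_feats: list of (batch, d_i) tensors, one per encoder.
        # video_feat: (batch, video_dim) tensor.
        # Returns one (batch, batch) cosine-similarity matrix per space.
        sims = []
        for proj_t, proj_v, t in zip(self.text_proj, self.video_proj,
                                     text_feats):
            q = F.normalize(proj_t(t), dim=-1)           # sentence in space i
            v = F.normalize(proj_v(video_feat), dim=-1)  # video in space i
            sims.append(q @ v.t())
        return sims


def multi_space_loss(sims, margin=0.2):
    """Multi-space multi-loss: one triplet ranking loss per common
    space, summed, so every space receives its own supervision."""
    total = 0.0
    for s in sims:
        pos = s.diag()  # similarities of the matched text-video pairs
        eye = torch.eye(s.size(0), dtype=torch.bool, device=s.device)
        # Hardest in-batch negative per query (diagonal masked out).
        neg = s.masked_fill(eye, float('-inf')).max(dim=1).values
        total = total + F.relu(margin - pos + neg).mean()
    return total
```

At retrieval time the per-space similarities would simply be combined, e.g. `score = sum(sims)`, so that no single encoder can dominate the ranking by virtue of producing a longer encoding vector.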


Results from the Paper


Ranked #2 on Ad-hoc video search on TRECVID-AVS16 (IACC.3) (using extra training data)

Task                 Dataset                  Model  Metric  Value  Global Rank
Ad-hoc video search  TRECVID-AVS16 (IACC.3)   SEA    infAP   0.164  #2
Ad-hoc video search  TRECVID-AVS17 (IACC.3)   SEA    infAP   0.234  #2
Ad-hoc video search  TRECVID-AVS18 (IACC.3)   SEA    infAP   0.128  #2
Ad-hoc video search  TRECVID-AVS19 (V3C1)     SEA    infAP   0.167  #2

Methods


No methods listed for this paper.