Multi-attention Networks for Temporal Localization of Video-level Labels

15 Nov 2019  ·  Lijun Zhang, Srinath Nizampatnam, Ahana Gangopadhyay, Marcos V. Conde ·

Temporal localization remains an important challenge in video understanding. In this work, we present our solution to the 3rd YouTube-8M Video Understanding Challenge organized by Google Research. Participants were required to build a segment-level classifier using a large-scale training set with noisy video-level labels and a relatively small validation set with accurate segment-level labels. We formulated the task as multiple instance multi-label learning and developed an attention-based mechanism that selectively emphasizes important frames through learned attention weights. Model performance is further improved by constructing multiple sets of attention networks. The model was then fine-tuned on the segment-level data set. Our final submission is an ensemble of attention/multi-attention networks, deep bag-of-frames models, recurrent neural networks, and convolutional neural networks. It ranked 13th on the private leaderboard and stands out for its efficient use of resources.
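
As a rough illustration of the attention mechanism described above (a sketch, not the authors' exact architecture), the snippet below pools frame-level features into a video-level representation using several independent attention heads and feeds the concatenated result to a multi-label classifier. All module names, layer choices, and dimensions are illustrative assumptions; only the 1152-dimensional feature size (1024 video + 128 audio) reflects the standard YouTube-8M frame features.

```python
# Minimal sketch (PyTorch) of multi-head attention pooling for
# multiple instance multi-label learning over video frames.
# Layer choices and hyperparameters are assumptions, not the paper's configuration.
import torch
import torch.nn as nn


class MultiAttentionPooling(nn.Module):
    def __init__(self, feat_dim=1152, num_heads=4, num_classes=1000):
        super().__init__()
        # One scoring network per attention head: frame feature -> scalar score.
        self.score_nets = nn.ModuleList(
            [nn.Linear(feat_dim, 1) for _ in range(num_heads)]
        )
        # Video-level multi-label classifier on the concatenated pooled features.
        self.classifier = nn.Linear(feat_dim * num_heads, num_classes)

    def forward(self, frames, mask=None):
        # frames: (batch, num_frames, feat_dim)
        # mask:   (batch, num_frames), 1 for valid frames, 0 for padding.
        pooled = []
        for score_net in self.score_nets:
            scores = score_net(frames).squeeze(-1)            # (batch, num_frames)
            if mask is not None:
                scores = scores.masked_fill(mask == 0, float("-inf"))
            weights = torch.softmax(scores, dim=1)            # attention weights over frames
            pooled.append((weights.unsqueeze(-1) * frames).sum(dim=1))  # (batch, feat_dim)
        video_repr = torch.cat(pooled, dim=-1)                # (batch, feat_dim * num_heads)
        return torch.sigmoid(self.classifier(video_repr))     # per-class probabilities


# Example: a batch of 2 videos, 300 frames each, 1152-dim frame features.
model = MultiAttentionPooling()
probs = model(torch.randn(2, 300, 1152))
print(probs.shape)  # torch.Size([2, 1000])
```

Each head learns its own softmax distribution over frames, so different heads can emphasize different temporal regions; concatenating the pooled vectors is one simple way to combine multiple attention networks before classification.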
