Revisiting Video Saliency: A Large-scale Benchmark and a New Model

In this work, we contribute to video saliency research in two ways. First, we introduce a new benchmark for predicting human eye movements during free-viewing of dynamic scenes, a benchmark long called for in this field. Our dataset, named DHF1K (Dynamic Human Fixation), consists of 1K high-quality, carefully selected video sequences spanning a wide range of scenes, motions, object types and background complexity. Existing video saliency datasets lack the variety and generality of common dynamic scenes and fall short in covering challenging situations in unconstrained environments. In contrast, DHF1K makes a significant leap in scalability, diversity and difficulty, and is expected to boost video saliency modeling. Second, we propose a novel video saliency model that augments the CNN-LSTM network architecture with an attention mechanism to enable fast, end-to-end saliency learning. The attention mechanism explicitly encodes static saliency information, allowing the LSTM to focus on learning a more flexible temporal saliency representation across successive frames. This design fully leverages existing large-scale static fixation datasets, avoids overfitting, and significantly improves training efficiency and testing performance. We thoroughly examine the performance of our model against state-of-the-art saliency models on three large-scale datasets (i.e., DHF1K, Hollywood2, UCF Sports). Experimental results over more than 1.2K testing videos containing 400K frames demonstrate that our model outperforms its competitors.
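To make the described architecture concrete, below is a minimal PyTorch sketch of a CNN-LSTM saliency model with a static attention branch, in the spirit of the abstract. This is an assumption-laden illustration, not the authors' released implementation: the class names (`ConvLSTMCell`, `AttentiveCNNLSTM`), layer sizes, and the exact attention-weighting form are all hypothetical.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal convolutional LSTM cell: all four gates from one conv."""
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, 3, padding=1)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

class AttentiveCNNLSTM(nn.Module):
    """Hypothetical sketch of a CNN-LSTM with a static attention branch.
    Layer sizes are illustrative, not the paper's configuration."""
    def __init__(self, feat_ch=64, hid_ch=64):
        super().__init__()
        # Stand-in frame encoder (the paper would use a pretrained backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Static attention branch: a per-pixel map in [0, 1] that can be
        # supervised with static fixation data (e.g., SALICON), per the abstract.
        self.attention = nn.Sequential(nn.Conv2d(feat_ch, 1, 1), nn.Sigmoid())
        self.convlstm = ConvLSTMCell(feat_ch, hid_ch)
        self.readout = nn.Conv2d(hid_ch, 1, 1)  # per-frame saliency map

    def forward(self, clip):  # clip: (B, T, 3, H, W)
        b, t, _, _, _ = clip.shape
        hs = cs = None
        maps = []
        for i in range(t):
            feat = self.encoder(clip[:, i])
            att = self.attention(feat)
            # Residual attention weighting, feat * (1 + att), so gradients
            # still flow where attention is near zero (one plausible choice).
            feat = feat * att + feat
            if hs is None:
                hs = feat.new_zeros(b, self.convlstm.hid_ch, *feat.shape[2:])
                cs = torch.zeros_like(hs)
            hs, cs = self.convlstm(feat, (hs, cs))
            maps.append(torch.sigmoid(self.readout(hs)))
        return torch.stack(maps, dim=1)  # (B, T, 1, H/4, W/4)

# Smoke test on a random 8-frame clip.
model = AttentiveCNNLSTM()
out = model(torch.randn(2, 8, 3, 64, 64))
print(out.shape)  # torch.Size([2, 8, 1, 16, 16])
```

The key idea this sketch tries to capture is the division of labor from the abstract: the attention branch handles static saliency (and can be trained on large static fixation datasets), while the ConvLSTM is left to model temporal saliency dynamics across frames.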

PDF · Abstract (CVPR 2018)

Datasets


Introduced in the Paper:

DHF1K

Used in the Paper:

SALICON
MSU Video Saliency Prediction
Task: Video Saliency Detection
Dataset: MSU Video Saliency Prediction
Model: ACLNet

Metric    Value   Global Rank
SIM       0.586   # 8
CC        0.651   # 8
NSS       1.71    # 8
AUC-J     0.839   # 8
KLDiv     0.593   # 8
FPS       4.18    # 5
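The metrics above have standard definitions in the saliency-prediction literature. The NumPy sketch below shows how SIM, CC, NSS, and KLDiv are typically computed (AUC-J is omitted for brevity); the helper names are hypothetical, and since exact smoothing and epsilon conventions vary per benchmark, results may differ slightly from the leaderboard values.

```python
import numpy as np

def _normalize(m):
    """Rescale a map so it sums to 1 (treat it as a distribution)."""
    m = m.astype(np.float64)
    return m / (m.sum() + 1e-12)

def sim(pred, gt):
    """Similarity: histogram intersection of the two normalized maps."""
    return np.minimum(_normalize(pred), _normalize(gt)).sum()

def cc(pred, gt):
    """Pearson linear correlation coefficient between the two maps."""
    p = (pred - pred.mean()) / (pred.std() + 1e-12)
    g = (gt - gt.mean()) / (gt.std() + 1e-12)
    return (p * g).mean()

def nss(pred, fixations):
    """Normalized Scanpath Saliency: z-scored prediction at fixated pixels.
    `fixations` is a binary map of human fixation points (must be non-empty)."""
    p = (pred - pred.mean()) / (pred.std() + 1e-12)
    return p[fixations.astype(bool)].mean()

def kldiv(pred, gt):
    """KL divergence from the predicted to the ground-truth distribution."""
    p, g = _normalize(pred), _normalize(gt)
    return (g * np.log(g / (p + 1e-12) + 1e-12)).sum()

# Usage on random maps (real evaluation uses per-frame GT density/fixations).
pred = np.random.rand(64, 64)
gt = np.random.rand(64, 64)
fix = np.random.rand(64, 64) > 0.99
print(f"SIM={sim(pred, gt):.3f}  CC={cc(pred, gt):.3f}  "
      f"NSS={nss(pred, fix):.3f}  KL={kldiv(pred, gt):.3f}")
```

Note that SIM, CC, and KLDiv compare the prediction against a ground-truth fixation density map, while NSS is evaluated at the discrete fixation locations themselves; higher is better for all of these except KLDiv.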
