TASED-Net: Temporally-Aggregating Spatial Encoder-Decoder Network for Video Saliency Detection

ICCV 2019 · Kyle Min, Jason J. Corso

TASED-Net is a 3D fully-convolutional network architecture for video saliency detection. It consists of two building blocks: first, the encoder network extracts low-resolution spatiotemporal features from an input clip of several consecutive frames, and then the prediction network decodes the encoded features spatially while aggregating all the temporal information. As a result, a single prediction map is produced from an input clip of multiple frames. Frame-wise saliency maps can be predicted by applying TASED-Net in a sliding-window fashion over a video. The proposed approach assumes that the saliency map of any frame can be predicted by considering a limited number of past frames. The results of our extensive experiments on video saliency detection validate this assumption and demonstrate that our fully-convolutional model with its temporal-aggregation mechanism is effective. TASED-Net significantly outperforms previous state-of-the-art approaches on all three major large-scale datasets of video saliency detection: DHF1K, Hollywood2, and UCFSports. Analyzing the results qualitatively, we observe that our model is notably better at attending to salient moving objects.
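To make the two building blocks concrete, below is a minimal PyTorch sketch of the idea. `ToyTASED`, its layer widths, and the 32-frame clip length are illustrative assumptions, not the authors' implementation (the paper builds its encoder on a pretrained 3D network and uses a more elaborate temporal-aggregation scheme in its prediction network).

```python
import torch
import torch.nn as nn

class ToyTASED(nn.Module):
    """Simplified stand-in for TASED-Net: encode a clip in 3D,
    collapse time, decode spatially to a single saliency map."""

    def __init__(self):
        super().__init__()
        # Encoder: strided 3D convolutions produce low-resolution
        # spatiotemporal features from the input clip.
        self.encoder = nn.Sequential(
            nn.Conv3d(3, 32, kernel_size=3, stride=(1, 2, 2), padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Temporal aggregation: average-pool away the remaining temporal
        # extent (a crude stand-in for the paper's aggregation scheme),
        # so the decoder only has to upsample spatially.
        self.temporal_pool = nn.AdaptiveAvgPool3d((1, None, None))
        # Prediction network: 2D transposed convolutions decode the
        # features back to full spatial resolution, one channel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, clip):
        # clip: (B, 3, T, H, W) -> one map per clip: (B, 1, H, W)
        feat = self.encoder(clip)                   # (B, 128, T', H/8, W/8)
        feat = self.temporal_pool(feat).squeeze(2)  # (B, 128, H/8, W/8)
        return torch.sigmoid(self.decoder(feat))
```

Frame-wise prediction then follows from sliding the model over the video, as described above; this loop skips the first `clip_len - 1` frames for brevity:

```python
def predict_video(model, video, clip_len=32):
    # video: (3, N, H, W); returns one map per frame from index clip_len-1 on.
    model.eval()
    maps = []
    with torch.no_grad():
        for t in range(clip_len, video.shape[1] + 1):
            clip = video[:, t - clip_len:t].unsqueeze(0)  # (1, 3, clip_len, H, W)
            maps.append(model(clip)[0])
    return torch.stack(maps)  # (N - clip_len + 1, 1, H, W)
```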

Task                      Dataset                        Model      Metric  Value  Global Rank
Video Saliency Detection  DHF1K                          TASED-Net  NSS     2.667  #3
Video Saliency Detection  MSU Video Saliency Prediction  TASED-Net  SIM     0.610  #3
Video Saliency Detection  MSU Video Saliency Prediction  TASED-Net  CC      0.710  #2
Video Saliency Detection  MSU Video Saliency Prediction  TASED-Net  NSS     1.96   #3
Video Saliency Detection  MSU Video Saliency Prediction  TASED-Net  AUC-J   0.852  #4
Video Saliency Detection  MSU Video Saliency Prediction  TASED-Net  KLDiv   0.538  #4
Video Saliency Detection  MSU Video Saliency Prediction  TASED-Net  FPS     1.85   #12
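As an aside on the table: NSS, CC, SIM, AUC-J, and KLDiv are the standard saliency-evaluation measures. Below is a small NumPy sketch of three of them under their usual definitions; this is illustrative, not either benchmark's official evaluation code.

```python
import numpy as np

def nss(pred, fixations):
    # Normalized Scanpath Saliency: mean of the standardized prediction
    # at human fixation points (fixations is a binary map).
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    return p[fixations > 0].mean()

def cc(pred, gt_density):
    # Linear Correlation Coefficient between the prediction and the
    # ground-truth fixation density map (Pearson r).
    p = (pred - pred.mean()) / (pred.std() + 1e-8)
    g = (gt_density - gt_density.mean()) / (gt_density.std() + 1e-8)
    return (p * g).mean()

def sim(pred, gt_density):
    # Similarity: histogram intersection of the two maps, each
    # normalized to sum to one.
    p = pred / (pred.sum() + 1e-8)
    g = gt_density / (gt_density.sum() + 1e-8)
    return np.minimum(p, g).sum()
```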
