Discriminative Feature Learning for Unsupervised Video Summarization

24 Nov 2018 · Yunjae Jung, Donghyeon Cho, Dahun Kim, Sanghyun Woo, In So Kweon

In this paper, we address the problem of unsupervised video summarization, which automatically extracts key-shots from an input video. Specifically, we tackle two critical issues based on our empirical observations: (i) ineffective feature learning due to flat distributions of output importance scores for each frame, and (ii) training difficulty when dealing with long-length video inputs. To alleviate the first problem, we propose a simple yet effective regularization loss term called variance loss. The proposed variance loss allows a network to predict output scores for each frame with high discrepancy, which enables effective feature learning and significantly improves model performance. For the second problem, we design a novel two-stream network named Chunk and Stride Network (CSNet) that exploits local (chunk) and global (stride) temporal views of the video features. Our CSNet yields better summarization results for long-length videos than existing methods. In addition, we introduce an attention mechanism to handle the dynamic information in videos. We demonstrate the effectiveness of the proposed methods through extensive ablation studies and show that our final model achieves new state-of-the-art results on two benchmark datasets.
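As a rough illustration of the two ideas above, the sketch below shows (a) a variance-style regularizer that penalizes flat per-frame score distributions and (b) a chunk/stride split of frame features into local and global temporal views. This is a minimal PyTorch sketch written for this summary, not the authors' implementation; the function names, the reciprocal-variance form of the loss, and the num_splits hyperparameter are assumptions made for clarity.

```python
import torch

def variance_loss(scores, eps=1e-4):
    # scores: (batch, num_frames) predicted importance scores in [0, 1].
    # Penalizing the reciprocal of the per-video score variance pushes the
    # network away from flat score distributions; the exact formulation in
    # the paper may differ (assumed form).
    return (1.0 / (scores.var(dim=1) + eps)).mean()

def chunk_and_stride_views(features, num_splits=4):
    # features: (num_frames, dim) per-frame CNN features.
    # Local "chunk" views are contiguous segments; global "stride" views
    # subsample every num_splits-th frame, so each view spans the whole video.
    chunks = list(torch.chunk(features, num_splits, dim=0))
    strides = [features[i::num_splits] for i in range(num_splits)]
    return chunks, strides

# Toy usage with random stand-ins for real features and predicted scores.
feats = torch.randn(120, 1024)            # 120 frames of 1024-d features
chunks, strides = chunk_and_stride_views(feats)
scores = torch.rand(1, 120)               # hypothetical per-frame scores
print(len(chunks), len(strides), variance_loss(scores).item())
```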


Datasets

SumMe · TvSum
Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Unsupervised Video Summarization | SumMe | CSNet | F1-score | 51.3 | #3
Unsupervised Video Summarization | SumMe | CSNet | Training time (s) | 568.6 | #3
Unsupervised Video Summarization | SumMe | CSNet | Parameters (M) | 100.76 | #6
Supervised Video Summarization | SumMe | CSNet | F1-score (Canonical) | 48.6 | #8
Supervised Video Summarization | SumMe | CSNet | F1-score (Augmented) | 48.7 | #3
Unsupervised Video Summarization | TvSum | CSNet | F1-score | 58.8 | #4
Unsupervised Video Summarization | TvSum | CSNet | Spearman's Rho | 0.034 | #3
Unsupervised Video Summarization | TvSum | CSNet | Kendall's Tau | 0.025 | #3
Unsupervised Video Summarization | TvSum | CSNet | Training time (s) | 1797 | #3
Unsupervised Video Summarization | TvSum | CSNet | Parameters (M) | 100.76 | #6
Supervised Video Summarization | TvSum | CSNet | F1-score (Canonical) | 58.5 | #12
Supervised Video Summarization | TvSum | CSNet | F1-score (Augmented) | 57.1 | #4

Methods


No methods listed for this paper.