Fast Video Shot Transition Localization with Deep Structured Models

13 Aug 2018  ·  Shitao Tang, Litong Feng, Zhanghui Kuang, Yimin Chen, Wei Zhang ·

Detection of video shot transitions is a crucial pre-processing step in video analysis. Previous studies have been restricted to detecting sudden content changes between frames through similarity measurement, and multi-scale operations are widely utilized to deal with transitions of various lengths. However, localization of gradual transitions is still under-explored due to the high visual similarity between adjacent frames. Cut transitions are abrupt semantic breaks, while gradual transitions contain low-level spatial-temporal patterns caused by video effects such as dissolves, in addition to gradual semantic breaks. To address this problem, we propose a structured network that detects these two types of shot transitions with separate, targeted models. Considering the speed-performance trade-off, we design an efficient framework. With one TITAN GPU, the proposed method runs at 30\(\times\) real-time speed. Experiments on the public TRECVID07 and RAI databases show that our method outperforms state-of-the-art methods. In order to train a high-performance shot transition detector, we contribute a new database, ClipShots, which contains 128,636 cut transitions and 38,120 gradual transitions from 4,039 online videos. ClipShots intentionally collects short videos to include more hard cases caused by hand-held camera vibrations, large object motions, and occlusion.
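The abstract describes two targeted models, one for abrupt cuts and one for gradual transitions, inside a framework designed for speed. The sketch below illustrates that kind of two-branch pipeline under the assumption of a cheap candidate-filtering stage followed by the two detectors; it is not the authors' released implementation, and every name (`candidate_segments`, `ShotTransitionPipeline`, the 16-frame window, the 0.3 threshold) is a hypothetical placeholder.

```python
# Minimal sketch of a two-branch shot-transition pipeline (hypothetical names;
# not the paper's released code). A cheap filtering stage flags candidate
# segments, then a cut detector and a gradual-transition localizer handle the
# two transition types separately.
import numpy as np


def candidate_segments(frames, stride=8, threshold=0.3):
    """Cheaply flag windows whose frame-difference exceeds a threshold.

    `frames` is an (N, H, W, C) uint8 array; this scoring is a stand-in
    for a fast candidate-filtering stage.
    """
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0)).mean(axis=(1, 2, 3))
    diffs /= 255.0
    return [t for t in range(0, len(diffs), stride) if diffs[t:t + stride].max() > threshold]


class ShotTransitionPipeline:
    """Run the cut model first; pass remaining candidates to the gradual model."""

    def __init__(self, cut_model, gradual_model):
        self.cut_model = cut_model          # classifies a short window as cut / no cut
        self.gradual_model = gradual_model  # localizes the span of a gradual transition

    def __call__(self, frames):
        results = []
        for t in candidate_segments(frames):
            window = frames[t:t + 16]
            if self.cut_model(window):               # abrupt semantic break
                results.append(("cut", t, t + 1))
            else:
                span = self.gradual_model(window)    # e.g. dissolve; (start, end) or None
                if span is not None:
                    results.append(("gradual", t + span[0], t + span[1]))
        return results
```

Splitting the two transition types lets each model specialize: the cut branch only needs to spot a sharp semantic break, while the gradual branch can focus on the slower spatial-temporal patterns of effects like dissolves.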


Datasets


Introduced in the Paper:

ClipShots

Used in the Paper:

ssd

Results from the Paper


Ranked #3 on Camera shot boundary detection on ClipShots (using extra training data)

Task: Camera shot boundary detection
Dataset: ClipShots
Model: DSM
Metric: Cut transition detector F1 score
Metric Value: 76.1
Global Rank: #3
Uses Extra Training Data: Yes
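
The reported metric is an F1 score over detected transitions. As a point of reference, the sketch below shows how such a score is typically computed from detected and ground-truth transition intervals; the overlap-based matching rule here is an assumption, and the benchmark's exact matching criterion may differ.

```python
# Sketch of an F1 computation for detected vs. ground-truth transitions.
# A detection counts as a true positive if it overlaps an unmatched
# ground-truth interval; the benchmark's exact matching rule may differ.
def f1_score(detections, ground_truth):
    """Both arguments are lists of (start_frame, end_frame) intervals."""
    matched = set()
    tp = 0
    for d_start, d_end in detections:
        for i, (g_start, g_end) in enumerate(ground_truth):
            if i not in matched and d_start <= g_end and g_start <= d_end:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```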

Methods