Temporal Action Localization in Untrimmed Videos via Multi-stage CNNs

CVPR 2016 · Zheng Shou, Dongang Wang, Shih-Fu Chang

We address temporal action localization in untrimmed long videos. This is important because videos in real applications are usually unconstrained and contain multiple action instances plus video content of background scenes or other activities. To address this challenging issue, we exploit the effectiveness of deep networks in temporal action localization via three segment-based 3D ConvNets: (1) a proposal network identifies candidate segments in a long video that may contain actions; (2) a classification network learns a one-vs-all action classification model to serve as initialization for the localization network; and (3) a localization network fine-tunes the learned classification network to localize each action instance. We propose a novel loss function for the localization network that explicitly considers temporal overlap and therefore achieves high temporal localization accuracy. Only the proposal network and the localization network are used during prediction. On two large-scale benchmarks, our approach significantly outperforms other state-of-the-art systems: mAP increases from 1.7% to 7.4% on MEXaction2 and from 15.0% to 19.0% on THUMOS 2014, when the overlap threshold for evaluation is set to 0.5.
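The abstract's key technical idea is a localization loss that couples classification confidence with temporal overlap, so that a segment's score reflects how well it covers the ground-truth instance. Below is a minimal NumPy sketch of a loss of the form L = L_softmax + λ·L_overlap, where the overlap term rescales each non-background segment's confidence by its temporal IoU. The function name, the exact α/λ values, and the toy inputs are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def localization_loss(probs, labels, overlaps, lam=1.0, alpha=0.25):
    """Sketch of an overlap-aware localization loss in the spirit of
    S-CNN: a softmax classification term plus a term tying the
    predicted confidence of each action segment to its temporal IoU
    with the matched ground-truth instance.

    probs:    (N, K+1) softmax outputs; class 0 is background
    labels:   (N,) ground-truth class indices k_n
    overlaps: (N,) temporal IoU v_n with the matched instance
    lam, alpha: hyperparameters (illustrative defaults)
    """
    n = len(labels)
    p_true = probs[np.arange(n), labels]      # P_n^{(k_n)}
    l_softmax = -np.mean(np.log(p_true))      # standard cross-entropy

    # Overlap term applies only to non-background segments: dividing
    # the squared confidence by overlap^alpha rewards high confidence
    # on high-overlap segments and penalizes it on low-overlap ones.
    action = labels > 0
    l_overlap = np.sum(
        0.5 * (p_true[action] ** 2 / overlaps[action] ** alpha - 1.0)
    ) / n
    return l_softmax + lam * l_overlap
```

With confidence held fixed, the loss decreases as the segment's temporal overlap grows, which is the behavior the abstract credits for the improved localization accuracy.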



Task                           Dataset      Model        Metric        Value   Global Rank
Temporal Action Localization   MEXaction2   S-CNN        mAP           7.4     # 1
Action Recognition             THUMOS’14    Shou et al.  mAP@0.1       47.7    # 7
                                                         mAP@0.2       43.5    # 7
                                                         mAP@0.3       36.3    # 10
                                                         mAP@0.4       28.7    # 11
                                                         mAP@0.5       19.0    # 11
Temporal Action Localization   THUMOS’14    S-CNN        mAP IOU@0.1   47.7    # 9
                                                         mAP IOU@0.2   43.5    # 8
                                                         mAP IOU@0.3   36.3    # 12
                                                         mAP IOU@0.4   28.7    # 11
                                                         mAP IOU@0.5   19.0    # 14

