Point-Level Temporal Action Localization: Bridging Fully-supervised Proposals to Weakly-supervised Losses

15 Dec 2020  ·  Chen Ju, Peisen Zhao, Ya Zhang, Yanfeng Wang, Qi Tian ·

Point-level temporal action localization (PTAL) aims to localize actions in untrimmed videos with only one timestamp annotation per action instance. Existing methods adopt the frame-level prediction paradigm to learn from sparse single-frame labels. However, such a framework inevitably suffers from a large solution space. This paper explores the proposal-based prediction paradigm for point-level annotations, which offers a more constrained solution space and consistent predictions among neighboring frames. The point-level annotations are first used as keypoint supervision to train a keypoint detector. At the location prediction stage, a simple but effective mapper module, which enables back-propagation of training errors, is introduced to bridge the fully-supervised framework with weak supervision. To the best of our knowledge, this is the first work to leverage the fully-supervised paradigm for the point-level setting. Experiments on THUMOS14, BEOID, and GTEA verify the effectiveness of the proposed method both quantitatively and qualitatively, and demonstrate that it outperforms state-of-the-art methods.
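The mapper module is only described at a high level in this abstract. As a rough illustration, the minimal PyTorch sketch below (all names and the soft-argmax formulation are our own assumptions, not the authors' code) shows one way a differentiable mapping from per-frame keypoint scores to proposal boundaries could let a proposal-level loss back-propagate into a point-supervised keypoint detector.

```python
# Hypothetical sketch (not the authors' implementation): a differentiable
# "mapper" that turns a 1-D keypoint heatmap into soft proposal boundaries,
# so a proposal-level loss can back-propagate to the keypoint detector.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftProposalMapper(nn.Module):
    """Maps per-frame keypoint scores to (start, end) of an action proposal.

    The soft-argmax formulation here is an illustrative assumption; the paper
    only states that the mapper enables error back-propagation between the
    proposal predictions and the point-supervised keypoint detector.
    """

    def __init__(self, temperature: float = 0.1):
        super().__init__()
        self.temperature = temperature

    def forward(self, heatmap: torch.Tensor, width_logits: torch.Tensor):
        # heatmap: (B, T) keypoint scores over T frames
        # width_logits: (B, T) per-frame width regression (in frames)
        _, T = heatmap.shape
        pos = torch.arange(T, dtype=heatmap.dtype, device=heatmap.device)

        # Soft-argmax: differentiable expected keypoint location.
        attn = F.softmax(heatmap / self.temperature, dim=1)       # (B, T)
        center = (attn * pos).sum(dim=1)                          # (B,)

        # Expected (positive) proposal width at the soft keypoint location.
        width = F.softplus((attn * width_logits).sum(dim=1))      # (B,)

        start = center - 0.5 * width
        end = center + 0.5 * width
        return start, end


if __name__ == "__main__":
    mapper = SoftProposalMapper()
    heatmap = torch.randn(2, 100, requires_grad=True)
    width_logits = torch.randn(2, 100, requires_grad=True)
    start, end = mapper(heatmap, width_logits)
    # Any proposal-level loss now back-propagates to the keypoint scores.
    loss = (end - start).mean()
    loss.backward()
    print(start.shape, heatmap.grad is not None)  # torch.Size([2]) True
```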

Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Weakly Supervised Action Localization | BEOID | Ju et al. | mAP@0.1:0.7 | 34.9 | #3
Weakly Supervised Action Localization | BEOID | Ju et al. | mAP@0.5 | 20.9 | #3
Weakly Supervised Action Localization | GTEA | Ju et al. | mAP@0.1:0.7 | 33.7 | #5
Weakly Supervised Action Localization | GTEA | Ju et al. | mAP@0.5 | 21.9 | #5
Weakly Supervised Action Localization | THUMOS14 | Ju et al. | avg-mAP (0.1-0.5) | 55.6 | #4
Weakly Supervised Action Localization | THUMOS14 | Ju et al. | avg-mAP (0.3-0.7) | 35.4 | #4
Weakly Supervised Action Localization | THUMOS14 | Ju et al. | avg-mAP (0.1:0.7) | 44.8 | #4
Weakly Supervised Action Localization | THUMOS'14 | Ju et al. | mAP@0.5 | 35.9 | #3
Weakly Supervised Action Localization | THUMOS 2014 | Ju et al. | mAP@0.5 | 35.9 | #8
Weakly Supervised Action Localization | THUMOS 2014 | Ju et al. | mAP@0.1:0.7 | 44.8 | #9
Weakly Supervised Action Localization | THUMOS 2014 | Ju et al. | mAP@0.1:0.5 | 55.6 | #7
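As a side note on the metric names above, mAP@X:Y and avg-mAP (X-Y) conventionally denote the mean of the mAP values computed at temporal IoU thresholds from X to Y in steps of 0.1. The snippet below uses made-up per-threshold values (not numbers from this paper) purely to show the averaging.

```python
# Illustrative only: how an averaged mAP over an IoU range is typically computed.
# The per-threshold mAP values below are hypothetical, not results from this paper.
import numpy as np

iou_thresholds = np.array([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7])
map_per_iou = np.array([0.68, 0.61, 0.53, 0.44, 0.36, 0.26, 0.17])  # hypothetical

avg_map_01_07 = map_per_iou.mean()                          # "mAP@0.1:0.7"
avg_map_01_05 = map_per_iou[iou_thresholds <= 0.5].mean()   # "mAP@0.1:0.5"
print(f"{avg_map_01_07:.3f} {avg_map_01_05:.3f}")
```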

Methods


No methods listed for this paper.