Multilevel semantic and adaptive actionness learning for weakly supervised temporal action localization

Neural Networks 2024  ·  Zhilin Li, Zilei Wang, Cerui Dong

Weakly supervised temporal action localization aims to identify and localize action instances in untrimmed videos using only video-level labels. Most existing methods build on a multiple instance learning (MIL) framework that uses a top-K strategy to select salient segments to represent the entire video; as a result, fine-grained video information is never learned, which degrades both action classification and localization. In this paper, we propose a Multilevel Semantic and Adaptive Actionness Learning network (SAL), which consists of a multilevel semantic learning (MSL) branch and an adaptive actionness learning (AAL) branch. The MSL branch introduces second-order video semantics, which capture fine-grained information in videos and improve video-level classification; we further propagate these second-order semantics to action segments to sharpen the distinction between different actions. The AAL branch uses pseudo labels to learn class-agnostic action information: it introduces a video segment mix-up strategy to improve foreground generalization and adds an adaptive actionness mask that balances the quality and quantity of pseudo labels, thereby stabilizing training. Extensive experiments show that SAL achieves state-of-the-art results on three benchmarks. Code: https://github.com/lizhilin-ustc/SAL
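The abstract names three mechanisms: top-K MIL pooling for video-level classification, a segment mix-up in the AAL branch, and an adaptive actionness mask for pseudo-label filtering. The sketch below shows how such components are commonly implemented in PyTorch; the function names, the k ratio, the Beta-distributed mixing coefficient, and the mean-based threshold are illustrative assumptions, not details taken from the paper (see the linked repository for the authors' code).

```python
import torch

def topk_mil_video_logits(cas: torch.Tensor, k_ratio: float = 0.125) -> torch.Tensor:
    """MIL pooling: average the top-K per-class segment scores.

    cas: (B, T, C) class activation sequence over T segments.
    k_ratio is a common heuristic (k = T // 8), not the paper's value.
    """
    B, T, C = cas.shape
    k = max(1, int(T * k_ratio))
    topk_scores, _ = torch.topk(cas, k, dim=1)   # (B, k, C)
    return topk_scores.mean(dim=1)               # (B, C) video-level logits

def segment_mixup(fg_feats: torch.Tensor, bg_feats: torch.Tensor,
                  alpha: float = 0.5):
    """Hypothetical segment-level mix-up: blend foreground and background
    segment features so the actionness head sees interpolated examples.
    Uses the standard Beta(alpha, alpha) coefficient from mix-up; SAL's
    exact blending scheme may differ.
    """
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * fg_feats + (1.0 - lam) * bg_feats, lam

def adaptive_actionness_mask(actionness: torch.Tensor,
                             floor: float = 0.5) -> torch.Tensor:
    """Hypothetical adaptive mask: keep pseudo labels only where predicted
    actionness exceeds a threshold that adapts to the current predictions,
    trading pseudo-label quantity for quality. The adaptation rule here
    (max of a fixed floor and the batch mean) is an illustrative guess.
    """
    thresh = max(floor, actionness.mean().item())
    return (actionness > thresh).float()
```

In a typical pipeline of this kind, the video-level logits feed a classification loss against the video labels, while the mask gates the pseudo-label loss on the class-agnostic actionness head.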

Task                                   Dataset          Model  Metric        Value  Global Rank
Weakly Supervised Action Localization  ActivityNet-1.2  SAL    mAP@0.5       48.5   #3
Weakly Supervised Action Localization  ActivityNet-1.2  SAL    Mean mAP      30.8   #1
Weakly Supervised Action Localization  ActivityNet-1.3  SAL    mAP@0.5       44.5   #2
Weakly Supervised Action Localization  ActivityNet-1.3  SAL    mAP@0.5:0.95  28.8   #1
Weakly Supervised Action Localization  THUMOS 2014      SAL    mAP@0.5       41.8   #4
Weakly Supervised Action Localization  THUMOS 2014      SAL    mAP@0.1:0.7   50.6   #3
Weakly Supervised Action Localization  THUMOS 2014      SAL    mAP@0.1:0.5   61.5   #3
