Spatial-Temporal Pyramid Graph Reasoning for Action Recognition

Spatial-temporal relation reasoning is an important yet challenging problem in video action recognition. Previous works typically apply local operations such as 2D or 3D convolutions to model space-time interactions in video sequences, or capture long-range space-time relations only at a single fixed scale, which is inadequate for a comprehensive action representation. Moreover, most models treat all input frames equally for the final classification, without selecting key frames or motion-sensitive regions; this introduces irrelevant video content and hurts performance. In this paper, we propose a generic Spatial-Temporal Pyramid Graph Network (STPG-Net) that adaptively captures long-range spatial-temporal relations in video sequences at multiple scales. Specifically, we design a temporal attention (TA) module and a spatial-temporal attention (STA) module to learn, at the feature level, the contribution of each frame and of each space-time region to an action, respectively. We then use the selected key information to build spatial-temporal pyramid graphs for long-range relation reasoning and more comprehensive action representation learning. STPG-Net can be integrated into 2D and 3D backbone networks in a plug-and-play manner. Extensive experiments show that it brings consistent improvements over strong baselines on several standard action recognition benchmarks (i.e., Something-Something V1 & V2 and FineGym), demonstrating the effectiveness of our approach.
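The abstract does not include code, but the frame-selection idea behind the TA module can be illustrated with a minimal sketch: score each frame from globally pooled features, then reweight the per-frame features so key frames dominate. The class name `TemporalAttention`, the global-average spatial pooling, and the hidden size below are our assumptions for illustration, not the authors' implementation.

```python
# A minimal PyTorch sketch of frame-level temporal attention in the spirit of
# the TA module. Design choices (pooling, MLP scorer, hidden size) are
# illustrative assumptions; the paper's exact TA module may differ.
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Scores each frame and reweights frame features accordingly."""
    def __init__(self, channels: int, hidden: int = 64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(channels, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, channels, height, width) backbone frame features
        pooled = x.mean(dim=(3, 4))                          # (B, T, C) spatial pooling
        weights = torch.softmax(self.score(pooled), dim=1)   # (B, T, 1) frame importance
        return x * weights.unsqueeze(-1).unsqueeze(-1)       # emphasize key frames

if __name__ == "__main__":
    feats = torch.randn(2, 8, 256, 14, 14)   # 8 frames of backbone features
    out = TemporalAttention(channels=256)(feats)
    print(out.shape)  # torch.Size([2, 8, 256, 14, 14])
```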
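Likewise, the multi-scale graph reasoning can be sketched as: pool each frame into an s x s grid of regions at several scales, treat the regions as graph nodes, connect them with pairwise affinities, and propagate messages across space and time. The dot-product affinities, single linear projection per scale, and residual fusion below are assumptions; the paper's actual pyramid graph construction may differ.

```python
# A rough sketch of pyramid (multi-scale) graph reasoning over space-time
# regions, under assumed design choices; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidGraphReasoning(nn.Module):
    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.proj = nn.Linear(channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, C, H, W) frame features
        b, t, c, h, w = x.shape
        out = x
        for s in self.scales:
            # Pool each frame into an s x s grid of regions -> T*s*s graph nodes.
            nodes = F.adaptive_avg_pool2d(x.reshape(b * t, c, h, w), s)
            nodes = nodes.reshape(b, t, c, s, s).permute(0, 1, 3, 4, 2)
            nodes = nodes.reshape(b, t * s * s, c)                     # (B, N, C)
            # Dense affinity graph over all space-time nodes (long-range relations).
            affinity = torch.softmax(nodes @ nodes.transpose(1, 2) / c ** 0.5, dim=-1)
            msg = self.proj(affinity @ nodes)                          # message passing
            # Broadcast aggregated node features back onto the frame grid.
            msg = msg.reshape(b * t, s, s, c).permute(0, 3, 1, 2)
            out = out + F.interpolate(msg, size=(h, w)).reshape(b, t, c, h, w)
        return out
```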

Task                 Dataset                   Model                Metric           Value   Global Rank
Action Recognition   Something-Something V1    STPG (8+16 frames)   Top-1 Accuracy   53.5    #35
Action Recognition   Something-Something V2    STPG (8+16 frames)   Top-1 Accuracy   67.0    #72
