Explore Human Parsing Modality for Action Recognition

Multimodal action recognition methods have achieved great success using pose and RGB modalities. However, skeleton sequences lack appearance details, and RGB images suffer from irrelevant noise due to modality limitations. To address this, we introduce human parsing feature maps as a novel modality, since they selectively retain effective semantic features of body parts while filtering out most irrelevant noise. We propose a new dual-branch framework called Ensemble Human Parsing and Pose Network (EPP-Net), which is the first to leverage both skeleton and human parsing modalities for action recognition. The human pose branch feeds robust skeletons into a graph convolutional network to model pose features, while the human parsing branch leverages depictive parsing feature maps to model parsing features via convolutional backbones. The two high-level features are effectively combined through a late-fusion strategy for better action recognition. Extensive experiments on the NTU RGB+D and NTU RGB+D 120 benchmarks consistently verify the effectiveness of our proposed EPP-Net, which outperforms existing action recognition methods. Our code is available at: https://github.com/liujf69/EPP-Net-Action.
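The late-fusion step described above can be sketched as a weighted combination of the class scores produced by the two branches. This is a minimal illustrative sketch, not the authors' implementation: the function name `late_fusion`, the fusion weight `alpha`, and the use of softmax-normalized scores are all assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the class dimension.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def late_fusion(pose_logits, parsing_logits, alpha=0.6):
    # Hypothetical late fusion: convert each branch's logits to class
    # probabilities, then take a weighted sum (alpha weights the pose branch).
    return alpha * softmax(pose_logits) + (1 - alpha) * softmax(parsing_logits)

# Toy example: batch of 2 clips, 60 action classes (as in NTU RGB+D).
rng = np.random.default_rng(0)
pose_logits = rng.normal(size=(2, 60))      # stand-in for GCN branch output
parsing_logits = rng.normal(size=(2, 60))   # stand-in for CNN branch output

fused = late_fusion(pose_logits, parsing_logits)
predictions = fused.argmax(axis=-1)  # per-clip predicted action class
```

Because each branch's scores are normalized before mixing, the fused scores for every clip still sum to one, and `alpha` can be tuned on a validation set to balance the two modalities.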

CAAI Transactions, 2023

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Action Recognition | NTU RGB+D | EPP-Net (Parsing + Pose) | Accuracy (CS) | 94.7 | # 7 |
| Action Recognition | NTU RGB+D | EPP-Net (Parsing + Pose) | Accuracy (CV) | 97.7 | # 11 |
| Action Recognition | NTU RGB+D 120 | EPP-Net (Parsing + Pose) | Accuracy (Cross-Subject) | 91.1 | # 7 |
| Action Recognition | NTU RGB+D 120 | EPP-Net (Parsing + Pose) | Accuracy (Cross-Setup) | 92.8 | # 4 |
