A Dense-Sparse Complementary Network for Human Action Recognition based on RGB and Skeleton Modalities
The vulnerability of RGB-based human action recognition to complex environments and varying scenes can be compensated by the skeleton modality. Action recognition methods that fuse RGB and skeleton modalities have therefore received increasing attention. However, the recognition performance of existing methods remains unsatisfactory because their sampling, modeling, and fusion strategies are insufficiently optimized, and their computational cost is heavy. In this paper, we propose a Dense-Sparse Complementary Network (DSCNet), which leverages the complementary information of the RGB and skeleton modalities to obtain competitive action recognition performance at a light computational cost. Specifically, we first adopt dense and sparse sampling strategies that suit the respective advantages of the RGB and skeleton modalities. We then use the skeleton as guiding information to crop the key active region of the persons in each RGB frame, which largely eliminates background interference. Moreover, a Short-Term Motion Extraction Module (STMEM) is proposed to compress the densely sampled RGB frames into fewer frames before feeding them into the backbone network, avoiding a surge in computational cost, and a Sparse Multi-Scale Spatial–Temporal convolutional neural Network (Sparse-MSSTNet) is designed to model the sparse skeletons. Extensive experiments show that our method effectively combines the complementary information of the RGB and skeleton modalities to improve recognition accuracy. DSCNet achieves competitive performance on the NTU RGB+D 60, NTU RGB+D 120, PKU-MMD, UAV-Human, IKEA ASM and Northwestern-UCLA datasets with much lower computational cost than existing methods. The code is available at https://github.com/Maxchengqin/DSCNet.
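The sketch below is a rough illustration of two ideas from the abstract: dense/sparse frame-index selection for the two modalities and skeleton-guided cropping of the active person region in an RGB frame. It is a minimal sketch, not the authors' implementation (see the repository linked above); the function names, the margin value, and the frame counts are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of skeleton-guided cropping and
# dense/sparse sampling as described in the abstract. All names, the margin
# value, and the frame counts below are illustrative assumptions.
import numpy as np


def skeleton_guided_crop(frame, joints_2d, margin=0.15):
    """Crop the active person region of an RGB frame using 2D joints.

    frame:     H x W x 3 RGB image (numpy array).
    joints_2d: (num_joints, 2) array of (x, y) pixel coordinates;
               unreliable joints may be marked with NaN.
    margin:    fraction of the box size added on each side (assumed value).
    """
    h, w = frame.shape[:2]
    valid = joints_2d[~np.isnan(joints_2d).any(axis=1)]
    if valid.size == 0:  # no reliable joints: fall back to the full frame
        return frame
    x_min, y_min = valid.min(axis=0)
    x_max, y_max = valid.max(axis=0)
    pad_x = (x_max - x_min) * margin
    pad_y = (y_max - y_min) * margin
    x0 = int(max(0, np.floor(x_min - pad_x)))
    y0 = int(max(0, np.floor(y_min - pad_y)))
    x1 = int(min(w, np.ceil(x_max + pad_x)))
    y1 = int(min(h, np.ceil(y_max + pad_y)))
    return frame[y0:y1, x0:x1]


def sample_indices(num_frames, dense=32, sparse=8):
    """Illustrative index selection: many evenly spaced frames for the
    densely sampled RGB branch, few for the sparsely sampled skeleton branch."""
    dense_idx = np.linspace(0, num_frames - 1, dense).astype(int)
    sparse_idx = np.linspace(0, num_frames - 1, sparse).astype(int)
    return dense_idx, sparse_idx
```

In this reading, the densely sampled, cropped RGB clips would then be compressed by STMEM before entering the RGB backbone, while the sparsely sampled skeletons would be modeled by Sparse-MSSTNet; the exact sampling rates and crop policy are design details of the paper, not fixed by this sketch.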
Results
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Action Recognition | NTU RGB+D | DSCNet (RGB + Pose) | Accuracy (CS) | 97.4 | # 1 |
| Action Recognition | NTU RGB+D | DSCNet (RGB + Pose) | Accuracy (CV) | 99.4 | # 2 |
| Action Recognition | NTU RGB+D 120 | DSCNet (RGB + Pose) | Accuracy (Cross-Subject) | 95.6 | # 2 |
| Action Recognition | NTU RGB+D 120 | DSCNet (RGB + Pose) | Accuracy (Cross-Setup) | 96.7 | # 1 |
| Skeleton Based Action Recognition | N-UCLA | DSCNet (RGB + Pose) | Accuracy | 99.1 | # 1 |
| Action Recognition In Videos | PKU-MMD | DSCNet (RGB + Pose) | X-Sub | 97.4 | # 1 |
| Action Recognition In Videos | PKU-MMD | DSCNet (RGB + Pose) | X-View | 98.8 | # 1 |
| Skeleton Based Action Recognition | UAV-Human | DSCNet (RGB + Pose) | CSv1 (%) | 47.3 | # 2 |
| Skeleton Based Action Recognition | UAV-Human | DSCNet (RGB + Pose) | CSv2 (%) | 71.1 | # 3 |