Fusing Higher-order Features in Graph Neural Networks for Skeleton-based Action Recognition

4 May 2021  ·  Zhenyue Qin, Yang Liu, Pan Ji, Dongwoo Kim, Lei Wang, Bob McKay, Saeed Anwar, Tom Gedeon

Skeleton sequences are lightweight and compact, making them ideal candidates for action recognition on edge devices. Recent skeleton-based action recognition methods extract features from 3D joint coordinates as spatial-temporal cues and fuse these representations in a graph neural network to boost recognition performance. The use of first- and second-order features, i.e., joint and bone representations, has led to high accuracy. Nonetheless, many models are still confused by actions with similar motion trajectories. To address this issue, we propose fusing higher-order features, in the form of angular encodings, into modern architectures to robustly capture the relationships between joints and body parts. This simple fusion with popular spatial-temporal graph neural networks achieves new state-of-the-art accuracy on two large benchmarks, NTU60 and NTU120, while using fewer parameters and less run time. Our source code is publicly available at: https://github.com/ZhenyueQin/Angular-Skeleton-Encoding.
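The core idea of an angular encoding can be illustrated with a short sketch: for each triple of joints, compute the cosine of the angle formed at a center joint by the two bone vectors pointing to its neighbors, yielding a per-frame feature that is invariant to the body's absolute position and scale. The snippet below is a minimal illustration of this idea, not the authors' implementation; the function name `angular_encoding`, the joint-triple indices, and the random input are hypothetical stand-ins.

```python
import numpy as np

def angular_encoding(joints, triples):
    """Compute third-order angular features from 3D joint coordinates.

    joints:  (T, V, 3) array of 3D joint positions over T frames.
    triples: list of (center, end1, end2) joint-index tuples; the angle
             at `center` between the bones to `end1` and `end2` is encoded.
    Returns: (T, len(triples)) array of cosine-angle features.
    """
    feats = []
    for c, a, b in triples:
        v1 = joints[:, a] - joints[:, c]   # bone vector center -> end1, shape (T, 3)
        v2 = joints[:, b] - joints[:, c]   # bone vector center -> end2
        cos = np.einsum('td,td->t', v1, v2) / (
            np.linalg.norm(v1, axis=1) * np.linalg.norm(v2, axis=1) + 1e-8)
        feats.append(cos)
    return np.stack(feats, axis=1)

# Example: the angle at the elbow (hypothetical joint indices for a
# 25-joint NTU-style skeleton).
T, V = 64, 25                               # frames, joints
seq = np.random.randn(T, V, 3)              # stand-in for a real skeleton sequence
ang = angular_encoding(seq, [(10, 9, 11)])  # (elbow, shoulder, wrist)
print(ang.shape)                            # (64, 1)
```

Encoding the cosine directly, rather than the angle itself, avoids the numerical instability of `arccos` near 0 and pi and keeps the feature bounded in [-1, 1].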

Results

Task | Dataset | Model | Metric | Value | Global Rank
Skeleton Based Action Recognition | NTU RGB+D | AngNet-JA + BA + JBA + VJBA | Accuracy (Cross-View) | 96.4% | #26
Skeleton Based Action Recognition | NTU RGB+D | AngNet-JA + BA + JBA + VJBA | Accuracy (Cross-Subject) | 91.7% | #24
Skeleton Based Action Recognition | NTU RGB+D 120 | AngNet-JA + BA + JBA + VJBA | Accuracy (Cross-Subject) | 88.2% | #19
Skeleton Based Action Recognition | NTU RGB+D 120 | AngNet-JA + BA + JBA + VJBA | Accuracy (Cross-Setup) | 89.2% | #20
Skeleton Based Action Recognition | NTU RGB+D 120 | AngNet-JA + BA + JBA + VJBA | Ensembled Modalities | 4 | #1
