SkeletonNet: Mining Deep Part Features for 3-D Action Recognition

This letter presents SkeletonNet, a deep learning framework for skeleton-based 3-D action recognition. Given a skeleton sequence, two factors are important for action recognition: the spatial structure of the skeleton joints in each frame and the temporal information across frames. We first extract body-part-based features from each frame of the skeleton sequence. Compared to the original coordinates of the skeleton joints, the proposed features are translation, rotation, and scale invariant. To learn robust temporal information, instead of treating the features of all frames as a time series, we transform them into images and feed these to the proposed deep learning network, which consists of two parts: one extracts general features from the input images, and the other generates a discriminative and compact representation for action recognition. The proposed method is tested on the SBU Kinect Interaction dataset, the CMU dataset, and the large-scale NTU RGB+D dataset, and achieves state-of-the-art performance.
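
The abstract describes two steps, per-frame invariant part features followed by a CNN over the resulting feature "image", but the page includes no code. Below is a minimal PyTorch sketch of such a pipeline under loose assumptions: the relative-vector / unit-norm / pairwise-cosine feature construction and all layer sizes are illustrative guesses, and `PAIRS`, `part_features`, and `SkeletonCNN` are hypothetical names, not the authors' implementation.

```python
import numpy as np
import torch
import torch.nn as nn

# Hypothetical joint pairs defining body parts (indices into the joint list);
# the paper's actual part definitions differ.
PAIRS = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5)]

def part_features(skeleton, pairs=PAIRS):
    """Per-frame invariant features from a (T, J, 3) skeleton sequence."""
    feats = []
    for frame in skeleton:
        # Relative joint vectors: invariant to translation of the whole body.
        vecs = np.stack([frame[j] - frame[i] for i, j in pairs])
        # Unit-normalizing each vector removes scale.
        unit = vecs / (np.linalg.norm(vecs, axis=1, keepdims=True) + 1e-8)
        # Pairwise cosines between unit vectors are additionally rotation invariant.
        cos = unit @ unit.T
        feats.append(cos[np.triu_indices(len(pairs), k=1)])
    # (T, D) matrix, treated downstream as a one-channel "image".
    return np.stack(feats).astype(np.float32)

class SkeletonCNN(nn.Module):
    """Two-part network: a generic feature extractor, then a compact embedding."""
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = nn.Sequential(          # part 1: general features
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.embed = nn.Sequential(             # part 2: compact representation
            nn.Flatten(), nn.Linear(64 * 4 * 4, 128), nn.ReLU(),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):                       # x: (batch, 1, T, D)
        return self.classifier(self.embed(self.backbone(x)))
```

A toy forward pass with random data (40 frames, 6 joints, 60 classes as in NTU RGB+D):

```python
seq = np.random.rand(40, 6, 3)
img = torch.from_numpy(part_features(seq)).unsqueeze(0).unsqueeze(0)
logits = SkeletonCNN(num_classes=60)(img)  # shape: (1, 60)
```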

Datasets

SBU Kinect Interaction, CMU, NTU RGB+D

Task                               Dataset    Model        Metric         Value   Global Rank
Skeleton Based Action Recognition  NTU RGB+D  SkeletonNet  Accuracy (CV)  81.2    #109
Skeleton Based Action Recognition  NTU RGB+D  SkeletonNet  Accuracy (CS)  75.9    #108

Methods


No methods listed for this paper.