What and Where: Modeling Skeletons from Semantic and Spatial Perspectives for Action Recognition

7 Apr 2020 · Lei Shi, Yifan Zhang, Jian Cheng, Hanqing Lu

Skeleton data, which consists of only the 2D/3D coordinates of human joints, has been widely studied for human action recognition. Existing methods take the semantics as prior knowledge to group human joints and draw correlations according to their spatial locations, which we call the semantic perspective for skeleton modeling. In this paper, in contrast to previous approaches, we propose to model skeletons from a novel spatial perspective, from which the model takes the spatial location as prior knowledge to group human joints and mines the discriminative patterns of local areas in a hierarchical manner. The two perspectives are orthogonal and complementary to each other; by fusing them in a unified framework, our method achieves a more comprehensive understanding of the skeleton data. Moreover, we customize a dedicated network for each perspective. From the semantic perspective, we propose a Transformer-like network that excels at modeling joint correlations, and present three effective techniques to adapt it to skeleton data. From the spatial perspective, we transform the skeleton data into a sparse format for efficient feature extraction and present two types of sparse convolutional networks for sparse skeleton modeling. Extensive experiments are conducted on three challenging datasets for skeleton-based human action/gesture recognition, namely NTU-60, NTU-120, and SHREC, where our method achieves state-of-the-art performance.
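
The semantic perspective treats each joint as a token whose identity is known a priori, and lets an attention mechanism learn joint-to-joint correlations. The sketch below is not the authors' code; it is a minimal illustration, assuming a 25-joint NTU-style skeleton, a hypothetical `JointSelfAttention` module, and standard PyTorch self-attention in place of the paper's adapted Transformer blocks.

```python
# Minimal sketch (assumption: 25-joint skeleton, single frame, standard multi-head attention).
import torch
import torch.nn as nn

class JointSelfAttention(nn.Module):
    def __init__(self, in_channels=3, embed_dim=64, num_heads=4, num_joints=25):
        super().__init__()
        # Embed each joint's coordinates and add a learnable per-joint embedding,
        # so the model knows the semantic identity (hand, elbow, ...) of each token.
        self.embed = nn.Linear(in_channels, embed_dim)
        self.joint_embed = nn.Parameter(torch.zeros(1, num_joints, embed_dim))
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x):
        # x: (batch, num_joints, 3) -- one skeleton frame
        tokens = self.embed(x) + self.joint_embed            # (B, J, C)
        out, attn_weights = self.attn(tokens, tokens, tokens) # joint-to-joint correlations
        return self.norm(tokens + out), attn_weights          # residual connection

frame = torch.randn(2, 25, 3)                                 # 2 samples, 25 joints
feats, weights = JointSelfAttention()(frame)
print(feats.shape, weights.shape)                             # (2, 25, 64) and (2, 25, 25)
```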
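The spatial perspective instead groups joints by where they fall in space. A common way to prepare skeletons for sparse convolutional networks is to quantize the coordinates into occupied voxels and keep only the non-empty ones as a (coordinates, features) pair. The following is a minimal sketch under assumed choices (voxel size, mean-pooling of joints per voxel), not the paper's exact preprocessing.

```python
# Minimal sketch: quantize joint coordinates into a sparse set of occupied voxels,
# the typical input format of sparse convolutional networks.
import numpy as np

def skeleton_to_sparse(joints, voxel_size=0.05):
    """joints: (num_joints, 3) array of 3D coordinates.
    Returns integer voxel coordinates and per-voxel features (mean joint position)."""
    voxels = np.floor(joints / voxel_size).astype(np.int32)        # (J, 3) grid indices
    uniq, inverse = np.unique(voxels, axis=0, return_inverse=True) # keep only occupied voxels
    feats = np.zeros((len(uniq), 3), dtype=np.float32)
    counts = np.zeros(len(uniq), dtype=np.float32)
    for i, v in enumerate(inverse):                                # average joints sharing a voxel
        feats[v] += joints[i]
        counts[v] += 1
    feats /= counts[:, None]
    return uniq, feats                                             # sparse (coords, features) pair

coords, feats = skeleton_to_sparse(np.random.rand(25, 3))
print(coords.shape, feats.shape)                                   # e.g. (25, 3) (25, 3)
```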
