Part-based Graph Convolutional Network for Action Recognition

13 Sep 2018  ·  Kalpit Thakkar, P. J. Narayanan ·

Human actions comprise joint motions of articulated body parts, or `gestures'. The human skeleton is intuitively represented as a sparse graph, with joints as nodes and the natural connections between them as edges. Graph convolutional networks have been used to recognize actions from skeletal videos. We introduce a part-based graph convolutional network (PB-GCN) for this task, inspired by Deformable Part-based Models (DPMs). We divide the skeleton graph into four subgraphs, with joints shared across them, and learn a recognition model using a part-based graph convolutional network. We show that such a model improves recognition performance compared to a model that uses the entire skeleton graph. Instead of using 3D joint coordinates as node features, we show that using relative coordinates and temporal displacements boosts performance. Our model achieves state-of-the-art performance on two challenging benchmark datasets, NTURGB+D and HDM05, for skeletal action recognition.
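The node features described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it computes relative coordinates (each joint minus a reference joint, per frame) and temporal displacements (frame-to-frame differences) from a sequence of 3D joint positions. The choice of reference joint and the zero-padding of the first frame are assumptions for this sketch.

```python
import numpy as np

def relative_coordinates(joints, ref_joint=0):
    """Relative coordinates: each joint minus a reference joint, per frame.

    joints: (T, V, 3) array of 3D joint positions over T frames and V joints.
    ref_joint: index of the anchor joint (hypothetical choice; the paper may
    anchor differently, e.g. at the spine).
    """
    return joints - joints[:, ref_joint:ref_joint + 1, :]

def temporal_displacements(joints):
    """Temporal displacements: joint position at frame t minus frame t-1.

    The first frame is zero-padded so the output keeps shape (T, V, 3).
    """
    disp = np.zeros_like(joints)
    disp[1:] = joints[1:] - joints[:-1]
    return disp
```

Concatenating these two (T, V, 3) arrays along the feature axis would give a 6-dimensional geometric-plus-motion feature per joint, replacing the raw 3D coordinates.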

Results from the Paper


Task               | Dataset   | Model                  | Metric        | Value | Global Rank
Action Recognition | NTU RGB+D | PB-GCN (Skeleton only) | Accuracy (CS) | 87.5  | #21
Action Recognition | NTU RGB+D | PB-GCN (Skeleton only) | Accuracy (CV) | 93.2  | #18

Results from Other Papers


Task                              | Dataset   | Model  | Metric        | Value | Rank
Skeleton Based Action Recognition | NTU RGB+D | PB-GCN | Accuracy (CV) | 93.2  | #68
Skeleton Based Action Recognition | NTU RGB+D | PB-GCN | Accuracy (CS) | 87.5  | #59
