Bayesian Hierarchical Dynamic Model for Human Action Recognition

CVPR 2019  ·  Rui Zhao, Wanru Xu, Hui Su, Qiang Ji

Human action recognition remains a challenging task, partly due to the large variations in how actions are executed. To address this issue, we propose a probabilistic model called the Hierarchical Dynamic Model (HDM). Leveraging a Bayesian framework, the model parameters are allowed to vary across different data sequences, which increases the model's capacity to adapt to intra-class variations in both the spatial and temporal extent of actions. Meanwhile, the generative learning process allows the model to preserve the distinctive dynamic pattern of each action class. Through Bayesian inference, we can quantify the uncertainty of the classification, providing insight into the decision process. Compared to state-of-the-art methods, our method not only achieves competitive recognition performance on individual datasets but also shows better generalization across datasets. Experiments conducted on data with missing values further demonstrate the robustness of the proposed method.
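
The abstract describes Bayesian generative classification with per-sequence parameter variation and posterior uncertainty. Below is a minimal, hypothetical sketch of that general idea, not the authors' HDM: each class has a prior over dynamic-model parameters, a test sequence is scored by a Monte Carlo estimate of its class-conditional marginal likelihood (so each sequence effectively draws its own parameters), and the normalized scores serve as class probabilities with built-in uncertainty. The Gaussian random-walk dynamics, the priors, and all names here are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def log_lik_sequence(x, theta):
    """Log-likelihood of a (T, D) sequence under toy Gaussian
    random-walk dynamics: x_t ~ N(x_{t-1} + drift, scale^2 I)."""
    drift, log_scale = theta
    resid = x[1:] - x[:-1] - drift
    var = np.exp(2.0 * log_scale)
    return -0.5 * np.sum(resid**2 / var + np.log(2.0 * np.pi * var))

def log_marginal_lik(x, prior_mean, prior_std, n_samples=256):
    """Monte Carlo estimate of log p(x | c) = log E_{theta ~ prior_c}[p(x | theta)];
    sampling theta per sequence lets the model absorb intra-class variation."""
    logps = np.array([
        log_lik_sequence(x, prior_mean + prior_std * rng.standard_normal(2))
        for _ in range(n_samples)
    ])
    m = logps.max()
    return m + np.log(np.mean(np.exp(logps - m)))  # log-mean-exp

def classify(x, class_priors):
    """Posterior class probabilities: the softmax over class log-marginals
    doubles as an uncertainty estimate for the decision."""
    scores = np.array([log_marginal_lik(x, m, s) for m, s in class_priors])
    probs = np.exp(scores - scores.max())
    return probs / probs.sum()

# Toy usage: two hypothetical action classes differing in their drift prior.
class_priors = [(np.array([0.05, -1.0]), np.array([0.02, 0.1])),   # class 0
                (np.array([-0.05, -1.0]), np.array([0.02, 0.1]))]  # class 1
seq = np.cumsum(0.05 + 0.3 * rng.standard_normal((40, 1)), axis=0)  # fake (T, D) sequence
print(classify(seq, class_priors))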

Task                              | Dataset         | Model  | Metric Name   | Metric Value | Global Rank
Skeleton Based Action Recognition | Gaming 3D (G3D) | HDM-BG | Accuracy      | 92.0%        | #3
Skeleton Based Action Recognition | MSR Action3D    | HDM-BG | Accuracy      | 86.1%        | #3
Skeleton Based Action Recognition | UPenn Action    | HDM-BG | Accuracy      | 93.4%        | #3
Multimodal Activity Recognition   | UTD-MHAD        | HDM-BG | Accuracy (CS) | 92.8%        | #3
