Chained Multi-stream Networks Exploiting Pose, Motion, and Appearance for Action Classification and Detection

General human action recognition requires understanding of various visual cues. In this paper, we propose a network architecture that computes and integrates the most important visual cues for action recognition: pose, motion, and the raw images. For the integration, we introduce a Markov chain model which adds cues successively. The resulting approach is efficient and applicable to action classification as well as to spatial and temporal action localization. Both contributions clearly improve performance over the respective baselines. The overall approach achieves state-of-the-art action classification performance on the HMDB51, J-HMDB, and NTU RGB+D datasets. Moreover, it yields state-of-the-art spatio-temporal action localization results on UCF101 and J-HMDB.
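The chained integration described in the abstract can be illustrated with a toy sketch: each stream processes one cue (pose, then motion, then appearance) and conditions on the features of the previous stream, so cues are added successively as in a Markov chain. The linear "streams", feature sizes, and averaged readout below are illustrative assumptions for the sketch, not the paper's actual deep network architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 21  # e.g. J-HMDB has 21 action classes
FEAT_DIM = 16     # illustrative feature size (assumption)

def stream(x, prev_feat=None):
    """One stage of the chain: a random linear map (stand-in for a CNN
    stream) from the input cue, concatenated with the previous stage's
    features, to a feature vector and per-class scores."""
    if prev_feat is not None:
        x = np.concatenate([x, prev_feat])  # condition on the prior stream
    W_f = rng.standard_normal((FEAT_DIM, x.size)) * 0.1
    feat = np.tanh(W_f @ x)
    W_c = rng.standard_normal((NUM_CLASSES, FEAT_DIM)) * 0.1
    scores = W_c @ feat
    return feat, scores

# Three cues with illustrative dimensionalities: pose keypoints,
# optical flow (motion), and raw RGB appearance.
pose = rng.standard_normal(8)
flow = rng.standard_normal(16)
rgb = rng.standard_normal(32)

# Markov-chain integration: each stream refines the previous one.
f1, s1 = stream(pose)
f2, s2 = stream(flow, prev_feat=f1)
f3, s3 = stream(rgb, prev_feat=f2)

# One possible readout: average the per-stage class scores.
final = (s1 + s2 + s3) / 3
predicted_class = int(final.argmax())
```

The key property the sketch shows is that later streams see earlier streams' features, so a pose-only prediction (`s1`) is available early and is successively refined as motion and appearance are chained in.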

Published at ICCV 2017.
Task | Dataset | Model | Metric | Value | Rank
Skeleton Based Action Recognition | J-HMDB | Chained (RGB+Flow+Pose) | Accuracy (RGB+pose) | 76.1 | #6
Skeleton Based Action Recognition | J-HMDB | Chained (RGB+Flow+Pose) | Accuracy (pose) | 56.8 | #4
Skeleton Based Action Recognition | JHMDB (2D poses only) | Chained | Average accuracy of 3 splits | 56.8 | #5

