Interpretable 3D Human Action Analysis with Temporal Convolutional Networks

14 Apr 2017 · Tae Soo Kim, Austin Reiter

Modern deep learning models for 3D human action recognition have become increasingly discriminative. Together with the recent resurgence of skeleton-based 3D human action representations, both the quality and pace of progress have been significant. However, the inner workings of state-of-the-art learning-based methods for 3D human action recognition remain largely a black box. In this work, we propose to use a class of models known as Temporal Convolutional Networks (TCN) for 3D human action recognition. Compared to popular LSTM-based recurrent neural network models, and given interpretable input such as 3D skeletons, a TCN provides a way to explicitly learn readily interpretable spatio-temporal representations for 3D human action recognition. We describe our strategy for re-designing the TCN with interpretability in mind, and show how these characteristics of the model are leveraged to construct a powerful 3D activity recognition method. Through this work, we wish to take a step towards a spatio-temporal model that is easier to understand, explain, and interpret. The resulting model, Res-TCN, achieves state-of-the-art results on the largest 3D human action recognition dataset, NTU-RGBD.
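To make the core idea concrete, below is a minimal NumPy sketch of a residual temporal-convolution unit in the spirit of Res-TCN: a 1D convolution applied along the time axis of a per-frame skeleton feature sequence, with a skip connection around it. The layer sizes, kernel width, and function names are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def temporal_conv(x, w, b):
    """1D convolution along time with 'same' zero padding.
    x: (T, C_in) sequence of per-frame skeleton features.
    w: (k, C_in, C_out) filter bank; b: (C_out,) bias.
    """
    k, c_in, c_out = w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    T = x.shape[0]
    out = np.empty((T, c_out))
    for t in range(T):
        window = xp[t:t + k]  # (k, C_in) temporal window centered at t
        out[t] = np.tensordot(window, w, axes=([0, 1], [0, 1])) + b
    return out

def res_tcn_block(x, w, b):
    """Residual unit: x + ReLU(conv(x)). Channel counts must match
    so the identity skip connection can be added directly."""
    return x + np.maximum(temporal_conv(x, w, b), 0.0)

# Illustrative sizes: 20 frames, 75 features (25 joints x 3 coords), kernel 3.
rng = np.random.default_rng(0)
x = rng.standard_normal((20, 75))
w = rng.standard_normal((3, 75, 75)) * 0.01
b = np.zeros(75)
y = res_tcn_block(x, w, b)
print(y.shape)  # (20, 75): same temporal length and channel count as the input
```

The skip connection is what makes the learned filters easy to inspect: each residual branch adds an explicit, separable contribution on top of the input representation, so individual filter responses can be traced through the network.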



| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Multimodal Activity Recognition | EV-Action | TCN (Skeleton Kinect) | Accuracy | 80.1 | #1 |
| Multimodal Activity Recognition | EV-Action | TCN (Skeleton Vicon) | Accuracy | 64.1 | #6 |
| Skeleton Based Action Recognition | NTU RGB+D | TCN | Accuracy (CV) | 83.1 | #100 |
| Skeleton Based Action Recognition | NTU RGB+D | TCN | Accuracy (CS) | 74.3 | #106 |
| Skeleton Based Action Recognition | Varying-view RGB-D Action-Skeleton | Res-TCN | Accuracy (CS) | 63% | #3 |
| Skeleton Based Action Recognition | Varying-view RGB-D Action-Skeleton | Res-TCN | Accuracy (CV I) | 14% | #6 |
| Skeleton Based Action Recognition | Varying-view RGB-D Action-Skeleton | Res-TCN | Accuracy (CV II) | 48% | #4 |
| Skeleton Based Action Recognition | Varying-view RGB-D Action-Skeleton | Res-TCN | Accuracy (AV I) | 48% | #3 |
| Skeleton Based Action Recognition | Varying-view RGB-D Action-Skeleton | Res-TCN | Accuracy (AV II) | 68% | #3 |

