Learning Representative Temporal Features for Action Recognition

19 Feb 2018 · Ali Javidani, Ahmad Mahmoudi-Aznaveh

In this paper, a novel video classification method is presented that aims to recognize different categories of third-person videos efficiently. Our motivation is to achieve a lightweight model that can be trained with limited training data. With this intuition, the processing of the 3-dimensional video input is decomposed into a 1D problem along the temporal dimension on top of the 2D spatial one. The 2D spatial frames are processed by pre-trained networks with no training phase; the only step that involves training is classifying the 1D time series that results from describing the 2D signals. Specifically, optical flow images are first computed from consecutive frames and described by pre-trained CNNs. Their dimension is then reduced using PCA. By stacking the description vectors beside each other, a multi-channel time series is created for each video; each channel of the time series represents a specific feature and follows it over time. The main focus of the proposed method is to classify the obtained time series effectively. Towards this, the idea is to let the machine learn temporal features by training a multi-channel one-dimensional Convolutional Neural Network (1D-CNN). The 1D-CNN learns features along the temporal dimension only. Hence, the number of training parameters decreases significantly, which makes the method trainable even on smaller datasets. We show that the proposed method reaches state-of-the-art results on two public datasets, UCF11 and jHMDB, and competitive results on HMDB51.
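The following is a minimal sketch of the pipeline the abstract describes, written in PyTorch purely for illustration: per-frame optical-flow descriptors from a pre-trained CNN are reduced with PCA, stacked into a multi-channel time series (channels = features, axis = time), and classified by a 1D-CNN that convolves only along time. The backbone descriptor size, PCA dimension, number of sampled frames, and layer widths are assumptions, not the paper's exact configuration.

```python
# Hypothetical sketch of the described pipeline; sizes and layer choices are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

NUM_FRAMES = 30      # optical-flow images sampled per video (assumed)
PCA_DIM = 64         # reduced descriptor dimension (assumed)
NUM_CLASSES = 11     # e.g. UCF11

def build_time_series(flow_descriptors: np.ndarray, pca: PCA) -> np.ndarray:
    """flow_descriptors: (NUM_FRAMES, D) CNN descriptors of optical-flow images.
    Returns a (PCA_DIM, NUM_FRAMES) multi-channel time series: each channel is one
    reduced feature followed over time."""
    reduced = pca.transform(flow_descriptors)            # (NUM_FRAMES, PCA_DIM)
    return np.ascontiguousarray(reduced.T)               # channels x time

class TemporalCNN(nn.Module):
    """Multi-channel 1D-CNN that learns features along the temporal axis only."""
    def __init__(self, in_channels=PCA_DIM, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(128, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # collapse the temporal axis
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):                                 # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

# Usage with random stand-in data (a real pipeline would feed descriptors computed by a
# pre-trained 2D CNN on optical-flow images):
train_pool = np.random.randn(500, 4096).astype(np.float32)   # descriptors pooled over training videos
pca = PCA(n_components=PCA_DIM).fit(train_pool)
descriptors = np.random.randn(NUM_FRAMES, 4096).astype(np.float32)
series = torch.from_numpy(build_time_series(descriptors, pca)).unsqueeze(0).float()
logits = TemporalCNN()(series)                            # (1, NUM_CLASSES)
```

Because the convolutions run over a single (temporal) dimension with only PCA_DIM input channels, the trainable parameter count stays small, which is the property the abstract credits for trainability on small datasets.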
