Learning spatio-temporal representations with temporal squeeze pooling

11 Feb 2020  ·  Guoxi Huang, Adrian G. Bors ·

In this paper, we propose a new video representation learning method, named Temporal Squeeze (TS) pooling, which extracts the essential movement information from a long sequence of video frames and maps it into a small set of images, named Squeezed Images. By embedding Temporal Squeeze pooling as a layer into off-the-shelf Convolutional Neural Networks (CNNs), we design a new video classification model, named the Temporal Squeeze Network (TeSNet). The resulting Squeezed Images retain the movement information that is essential for optimizing the video classification objective. We evaluate our architecture on two video classification benchmarks and compare the results achieved with the state-of-the-art.
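The core idea of squeezing a long frame sequence into a few images can be sketched as a learnable weighted combination over the temporal axis. The snippet below is a minimal NumPy illustration of this idea, not the authors' exact layer: it assumes each Squeezed Image is a convex combination of the input frames, with the mixing weights (here `weights`, a hypothetical parameter) being what the network would learn during training.

```python
import numpy as np

def temporal_squeeze_pool(frames, weights):
    """Illustrative temporal squeeze pooling (a sketch, not the paper's exact layer).

    frames:  (T, C, H, W) input video frames
    weights: (N, T) mixing weights, one row per squeezed image (learnable in a CNN)
    returns: (N, C, H, W) squeezed images
    """
    # Normalize each row with a softmax so every squeezed image is a
    # convex combination of the T input frames.
    w = np.exp(weights - weights.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)
    # (N, T) @ (T, C*H*W) -> (N, C*H*W), then reshape back to images.
    T, C, H, W = frames.shape
    squeezed = w @ frames.reshape(T, -1)
    return squeezed.reshape(-1, C, H, W)

# Example: squeeze 16 frames into 2 squeezed images.
frames = np.random.rand(16, 3, 8, 8)
weights = np.random.randn(2, 16)
out = temporal_squeeze_pool(frames, weights)
print(out.shape)  # (2, 3, 8, 8)
```

In the paper's setting this operation would be a differentiable layer inside the CNN, so the weights are driven by the classification loss rather than fixed in advance.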


Results from the Paper


Ranked #43 on Action Recognition on UCF101 (using extra training data)

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|------|---------|-------|-------------|--------------|-------------|--------------------------|
| Action Recognition | HMDB-51 | TeSNet (ImageNet pretrained) | Average accuracy of 3 splits | 71.5 | #50 | Yes |
| Action Recognition | UCF101 | TeSNet (ImageNet pretrained) | 3-fold Accuracy | 95.2 | #43 | Yes |
