From frame/clip-level feature learning to video-level representation building, deep learning methods for action recognition have developed rapidly in recent years. However, current methods still suffer from one or more limitations: confusion caused by training on partial observations, the lack of end-to-end learning, or modeling restricted to a single temporal scale.
In this paper, we build upon
two-stream ConvNets and propose Deep networks with Temporal Pyramid Pooling
(DTPP), an end-to-end video-level representation learning approach, to address
these problems. Specifically, RGB images and optical flow stacks are first sparsely sampled across the whole video. Then a temporal pyramid pooling layer aggregates the frame-level features, which encode both spatial and temporal cues. Finally, the trained model produces a compact video-level representation with multiple temporal scales, one that is both global and sequence-aware. Experimental results show that DTPP achieves state-of-the-art performance on two challenging video action datasets, UCF101 and HMDB51, using either ImageNet or Kinetics pre-training.
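
As an illustration of the pipeline described above, the sketch below gives one minimal PyTorch reading of the two aggregation steps: sparse temporal sampling and temporal pyramid pooling. The pooling operator (max), the pyramid levels (1, 2, 4), the number of sampled segments, and all function names are illustrative assumptions, not the paper's exact specification.

import torch


def sparse_sample_indices(num_frames, num_segments=25):
    # Pick the center frame of each of num_segments equal temporal
    # segments, so the samples span the whole video.
    # (25 segments is an assumed value, not taken from the paper.)
    seg_len = num_frames / num_segments
    return [int(i * seg_len + seg_len / 2) for i in range(num_segments)]


def temporal_pyramid_pool(frame_feats, levels=(1, 2, 4)):
    # frame_feats: (T, D) frame-level features from a two-stream backbone.
    # For each pyramid level n, split the T frames into n temporal
    # segments, max-pool within each segment, and concatenate all
    # segment vectors into one fixed-length video-level representation
    # of size D * sum(levels).
    pooled = []
    for n in levels:
        for segment in torch.chunk(frame_feats, n, dim=0):
            pooled.append(segment.max(dim=0).values)
    return torch.cat(pooled, dim=0)


# Example: 25 sparsely sampled frames with 2048-D features each.
indices = sparse_sample_indices(num_frames=300, num_segments=25)
frame_feats = torch.randn(len(indices), 2048)
video_repr = temporal_pyramid_pool(frame_feats)  # shape: (2048 * 7,)

Because every pyramid level pools over a fixed number of segments, the output length depends only on the feature dimension and the pyramid configuration, not on the video length, which is what makes the representation compact while still capturing multiple temporal scales.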