FASTER Recurrent Networks for Efficient Video Classification

10 Jun 2019  ·  Linchao Zhu, Laura Sevilla-Lara, Du Tran, Matt Feiszli, Yi Yang, Heng Wang

Typical video classification methods divide a video into short clips, run inference on each clip independently, and then aggregate the clip-level predictions to produce the video-level result. However, processing visually similar clips independently ignores the temporal structure of the video sequence and increases the computational cost at inference time. In this paper, we propose a novel framework named FASTER, i.e., Feature Aggregation for Spatio-TEmporal Redundancy. FASTER aims to leverage the redundancy between neighboring clips and reduce the computational cost by learning to aggregate the predictions from models of different complexities. The FASTER framework can integrate high-quality representations from expensive models, which capture subtle motion information, with lightweight representations from cheap models, which cover scene changes in the video. A new recurrent network (i.e., FAST-GRU) is designed to aggregate this mixture of representations. Compared with existing approaches, FASTER can reduce the FLOPs by over 10x while maintaining state-of-the-art accuracy on popular datasets such as Kinetics, UCF-101, and HMDB-51.
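To make the idea concrete, the sketch below illustrates this clip-sampling-plus-aggregation pattern in PyTorch. It is not the authors' implementation: the class name, the fixed sampling period, the assumption that each backbone returns one feature vector per clip, and the plain nn.GRU standing in for the proposed FAST-GRU are all illustrative choices.

```python
import torch
import torch.nn as nn

class ClipAggregator(nn.Module):
    """Hypothetical sketch of FASTER-style aggregation: an expensive backbone
    processes a few clips, a cheap backbone processes the rest, and a recurrent
    unit fuses the per-clip features into a video-level prediction."""

    def __init__(self, expensive_net, cheap_net, feat_dim, num_classes, period=4):
        super().__init__()
        self.expensive_net = expensive_net  # e.g. a heavy 3D CNN backbone
        self.cheap_net = cheap_net          # e.g. a lightweight backbone
        self.period = period                # expensive model on 1 of every `period` clips
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)  # stand-in for FAST-GRU
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, clips):
        # clips: (batch, num_clips, C, T, H, W)
        feats = []
        for i in range(clips.size(1)):
            net = self.expensive_net if i % self.period == 0 else self.cheap_net
            feats.append(net(clips[:, i]))      # each: (batch, feat_dim)
        feats = torch.stack(feats, dim=1)       # (batch, num_clips, feat_dim)
        _, h = self.gru(feats)                  # final hidden state summarizes the video
        return self.classifier(h[-1])           # video-level logits
```

In this sketch the overall cost is dominated by how often the expensive backbone runs, so the FLOPs saving is roughly set by the cost ratio of the two backbones and the choice of `period`.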


Results from the Paper


| Task                  | Dataset      | Model                         | Metric                       | Value | Global Rank |
|-----------------------|--------------|-------------------------------|------------------------------|-------|-------------|
| Action Recognition    | HMDB-51      | FASTER32 (Kinetics pretrain)  | Average accuracy of 3 splits | 75.7  | # 37        |
| Action Classification | Kinetics-400 | FASTER16 w/o sp               | Acc@1                        | 71.7  | # 170       |
| Action Classification | Kinetics-400 | FASTER32                      | Acc@1                        | 75.1  | # 150       |
| Action Recognition    | UCF101       | FASTER32                      | 3-fold Accuracy              | 96.9  | # 26        |
