ConvNet Architecture Search for Spatiotemporal Feature Learning

16 Aug 2017 · Du Tran, Jamie Ray, Zheng Shou, Shih-Fu Chang, Manohar Paluri

Learning image representations with ConvNets by pre-training on ImageNet has proven useful across many visual understanding tasks, including object detection, semantic segmentation, and image captioning. Although any image representation can be applied to video frames, a dedicated spatiotemporal representation is still vital in order to incorporate motion patterns that cannot be captured by appearance-based models alone. This paper presents an empirical ConvNet architecture search for spatiotemporal feature learning, culminating in a deep 3-dimensional (3D) Residual ConvNet. The proposed architecture outperforms C3D by a good margin on Sports-1M, UCF101, HMDB51, THUMOS14, and ASLAN, while being 2 times faster at inference, 2 times smaller in model size, and producing a more compact representation.
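As an illustration of the kind of building block such a 3D Residual ConvNet stacks, the sketch below (not the authors' code; all names are hypothetical) shows the output-shape arithmetic for a 3D convolution over a video clip of shape (channels, frames, height, width). A 3x3x3 kernel with stride 1 and padding 1 preserves the spatiotemporal extent, which is what allows a residual block to add its input directly to its output.

```python
# Hypothetical sketch: output-shape arithmetic for a 3D convolution
# over a clip tensor of shape (C, T, H, W), the basic operation in
# spatiotemporal ConvNets such as C3D and 3D residual networks.
def conv3d_out_shape(in_shape, out_channels, kernel=3, stride=1, padding=1):
    """Return (C, T, H, W) after a 3D convolution with a cubic kernel."""
    c, t, h, w = in_shape
    out = lambda n: (n + 2 * padding - kernel) // stride + 1
    return (out_channels, out(t), out(h), out(w))

clip = (3, 16, 112, 112)  # an RGB clip: 16 frames of 112x112 pixels
# Stride 1, padding 1 keeps T, H, W unchanged -> identity shortcut is valid.
print(conv3d_out_shape(clip, 64))            # (64, 16, 112, 112)
# Stride 2 halves the temporal and spatial extents between stages.
print(conv3d_out_shape((64, 16, 112, 112), 128, stride=2))  # (128, 8, 56, 56)
```

The stride-2 case corresponds to the downsampling transitions between stages, where the shortcut must be projected to the new shape before the addition.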


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Rank |
|---|---|---|---|---|---|
| Action Recognition | HMDB-51 | Res3D | Average accuracy of 3 splits | 54.9 | #71 |
| Action Classification | Kinetics-400 | TSN | Acc@1 | 73.9 | #158 |
| Action Recognition | UCF101 | Res3D | 3-fold Accuracy | 85.8 | #76 |
