Towards Efficient Coarse-to-Fine Networks for Action and Gesture Recognition

ECCV 2020 · Niamul Quader, Juwei Lu, Peng Dai, Wei Li

State-of-the-art approaches to video-based action and gesture recognition often rely on two key concepts: multistream processing and ensembles of convolutional networks. We improve and extend both. First, we systematically enlarge receptive fields for complementary feature extraction via coarse-to-fine (C2F) decomposition of the input imagery along the spatial and temporal dimensions, and adaptively focus training on important feature pathways using a reparameterized fully connected layer. Second, we develop a `use when needed' scheme with a `coarse-exit' strategy that invokes expensive high-resolution processing selectively, in a data-dependent fashion, retaining accuracy while reducing computation cost. Our C2F learning approach builds ensemble networks that outperform most competing methods in both computation cost and accuracy on the Something-Something V1, V2, and Jester datasets, while remaining competitive on Kinetics-400. Uniquely, our C2F ensemble networks can operate under varying computation-budget constraints.
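The `coarse-exit' idea can be illustrated with a minimal sketch: run the cheap coarse (low-resolution) pathway first, and invoke the expensive fine (high-resolution) pathway only when the coarse prediction is not confident enough. The function names, the softmax-confidence threshold, and the logit-averaging fusion below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def coarse_exit_predict(clip, coarse_net, fine_net, threshold=0.9):
    """Data-dependent 'use when needed' inference (illustrative sketch).

    Evaluates the cheap coarse pathway first; the expensive fine pathway
    runs only when coarse softmax confidence falls below `threshold`.
    The averaging fusion of logits is an assumption for this sketch.
    """
    coarse_logits = coarse_net(clip)
    probs = softmax(coarse_logits)
    if probs.max() >= threshold:
        return int(probs.argmax()), "coarse-exit"   # skip high-res compute
    fine_logits = fine_net(clip)                     # expensive pathway
    fused = softmax((coarse_logits + fine_logits) / 2.0)
    return int(fused.argmax()), "fine"

# Toy stand-in "networks" returning fixed logits, for demonstration only.
confident_coarse = lambda clip: np.array([6.0, 0.0, 0.0])   # peaked output
uncertain_coarse = lambda clip: np.array([0.1, 0.0, 0.05])  # near-uniform
fine = lambda clip: np.array([0.0, 4.0, 0.0])

clip = np.zeros((8, 32, 32, 3))  # dummy video clip (T, H, W, C)
print(coarse_exit_predict(clip, confident_coarse, fine))  # exits early
print(coarse_exit_predict(clip, uncertain_coarse, fine))  # falls back to fine
```

Under a fixed computation budget, the threshold acts as the knob: lowering it makes more clips exit at the coarse stage, trading accuracy for compute.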


Results from the Paper

Task                   Dataset        Model  Metric    Value  Rank
Action Classification  Jester (test)  C2F    Accuracy  97.09  #1

