DMC-Net: Generating Discriminative Motion Cues for Fast Compressed Video Action Recognition

Motion has been shown to be useful for video understanding, where motion is typically represented by optical flow. However, computing flow from video frames is very time-consuming. Recent works instead leverage the motion vectors and residuals readily available in the compressed video to represent motion at no extra cost. While this avoids flow computation, it also hurts accuracy, since motion vectors are noisy and have substantially reduced resolution, making them a less discriminative motion representation. To remedy these issues, we propose a lightweight generator network that reduces noise in motion vectors and captures fine motion details, yielding a more Discriminative Motion Cue (DMC) representation. Since optical flow is a more accurate motion representation, we train the DMC generator to approximate flow using a reconstruction loss and a generative adversarial loss, jointly with the downstream action classification task. Extensive evaluations on three action recognition benchmarks (HMDB-51, UCF-101, and a subset of Kinetics) confirm the effectiveness of our method. Our full system, consisting of the generator and the classifier, is coined DMC-Net; it obtains accuracy close to that of using flow while running two orders of magnitude faster than optical flow at inference time.
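Below is a minimal PyTorch sketch of the training objective described in the abstract: a lightweight generator maps compressed-domain motion vectors and residuals to a DMC, supervised by a flow reconstruction loss, an adversarial loss, and the downstream classification loss. The layer sizes, loss weights, and names (`DMCGenerator`, `Discriminator`, `lambda_recon`, `lambda_adv`) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DMCGenerator(nn.Module):
    """Hypothetical lightweight generator: motion vector (2 ch) + residual (3 ch) -> DMC (2 ch)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(5, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 2, 3, padding=1),
        )

    def forward(self, mv, residual):
        return self.net(torch.cat([mv, residual], dim=1))


class Discriminator(nn.Module):
    """Hypothetical discriminator distinguishing generated DMCs from real optical flow."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def generator_loss(dmc, flow, disc_logits_fake, cls_logits, labels,
                   lambda_recon=1.0, lambda_adv=0.1):
    """Joint objective: classification + flow reconstruction + adversarial term.

    Loss weights are assumed values for illustration only.
    """
    l_cls = F.cross_entropy(cls_logits, labels)          # downstream action classification
    l_recon = F.l1_loss(dmc, flow)                        # approximate optical flow
    l_adv = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))  # fool the discriminator
    return l_cls + lambda_recon * l_recon + lambda_adv * l_adv
```

At inference time only the generator and the classifier are run; the discriminator and the optical-flow supervision are dropped, which is how the system avoids flow computation entirely.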

CVPR 2019

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Action Recognition | HMDB-51 | I3D RGB + DMC-Net (I3D) | Average accuracy of 3 splits | 77.8 | #29 |
| Action Recognition | HMDB-51 | DMC-Net (I3D) | Average accuracy of 3 splits | 71.8 | #49 |
| Action Recognition | HMDB-51 | DMC-Net (ResNet-18) | Average accuracy of 3 splits | 62.8 | #64 |
| Action Recognition | UCF101 | I3D RGB + DMC-Net (I3D) | 3-fold Accuracy | 96.5 | #32 |
| Action Recognition | UCF101 | DMC-Net (I3D) | 3-fold Accuracy | 92.3 | #61 |
| Action Recognition | UCF-101 | DMC-Net (ResNet-18) | 3-fold Accuracy | 90.9 | #1 |

Methods


No methods listed for this paper.