Two-Stream Video Classification with Cross-Modality Attention

1 Aug 2019 · Lu Chi, Guiyu Tian, Yadong Mu, Qi Tian

Fusing multi-modality information is known to bring significant improvements in video classification. However, the most popular approach to date is still to simply fuse each stream's prediction scores at the final stage. A natural question is whether there exists a more effective way to fuse information across modalities. With the development of attention mechanisms in natural language processing, many successful applications of attention have emerged in computer vision. In this paper, we propose a cross-modality attention operation, which obtains information from the other modality more effectively than the two-stream approach. Correspondingly, we implement a compatible block named the CMA block, which wraps the proposed attention operation and can be plugged into many existing architectures. In the experiments, we comprehensively compare our method with the two-stream and non-local models widely used in video classification. All experiments clearly demonstrate the performance superiority of our proposed method. We also analyze the advantages of the CMA block by visualizing the attention map, which intuitively shows how the block helps the final prediction.
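The abstract describes the CMA block only at a high level, so the following is a minimal sketch of what a non-local-style cross-modality attention block could look like in PyTorch: queries are computed from one stream (e.g. RGB) while keys and values come from the other (e.g. optical flow), and a residual connection keeps the block pluggable into an existing backbone. All names (`CMABlock`, `in_channels`, `inter_channels`) and the 1/sqrt(d) attention scaling are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CMABlock(nn.Module):
    """Sketch of cross-modality attention in the non-local style:
    queries from one stream, keys/values from the other."""
    def __init__(self, in_channels, inter_channels=None):
        super().__init__()
        self.inter_channels = inter_channels or in_channels // 2
        # 1x1x1 convolutions project each modality into a shared embedding space
        self.theta = nn.Conv3d(in_channels, self.inter_channels, kernel_size=1)  # query (modality A)
        self.phi = nn.Conv3d(in_channels, self.inter_channels, kernel_size=1)    # key (modality B)
        self.g = nn.Conv3d(in_channels, self.inter_channels, kernel_size=1)      # value (modality B)
        self.out = nn.Conv3d(self.inter_channels, in_channels, kernel_size=1)    # restore channel count

    def forward(self, x_a, x_b):
        # x_a, x_b: (N, C, T, H, W) feature maps from the two streams
        n = x_a.size(0)
        q = self.theta(x_a).flatten(2).transpose(1, 2)  # (N, THW, C')
        k = self.phi(x_b).flatten(2)                    # (N, C', THW)
        v = self.g(x_b).flatten(2).transpose(1, 2)      # (N, THW, C')
        attn = torch.softmax(q @ k / (self.inter_channels ** 0.5), dim=-1)  # (N, THW, THW)
        y = (attn @ v).transpose(1, 2).reshape(n, self.inter_channels, *x_a.shape[2:])
        # residual connection: modality A features enriched with modality B context
        return x_a + self.out(y)
```

A quick shape check under these assumptions:

```python
rgb = torch.randn(2, 256, 8, 14, 14)    # RGB stream features
flow = torch.randn(2, 256, 8, 14, 14)   # optical-flow stream features
fused = CMABlock(256)(rgb, flow)        # -> (2, 256, 8, 14, 14)
```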


Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Action Classification | Kinetics-400 | CMA iter1 (16 frames) | Acc@1 | 75.98 | #145 |
| Action Recognition | UCF101 | CMA iter1-S | 3-fold Accuracy | 96.5 | #32 |
