Depth Guided Adaptive Meta-Fusion Network for Few-shot Video Recognition

20 Oct 2020 · Yuqian Fu, Li Zhang, Junke Wang, Yanwei Fu, Yu-Gang Jiang

Humans can easily recognize actions from only a few examples, while existing video recognition models still rely heavily on large-scale labeled data. This observation has motivated growing interest in few-shot video action recognition, which aims to learn new actions from only a handful of labeled samples. In this paper, we propose a depth-guided Adaptive Meta-Fusion Network for few-shot video recognition, termed AMeFu-Net. Concretely, we tackle the few-shot recognition problem from three aspects: first, we alleviate the extreme data scarcity by introducing depth information as a carrier of the scene, which provides extra visual cues to our model; second, we fuse the representation of each original RGB clip with multiple non-strictly corresponding depth clips sampled by our temporal asynchronization augmentation mechanism, which synthesizes new instances at the feature level; third, we propose a novel Depth Guided Adaptive Instance Normalization (DGAdaIN) fusion module to fuse the two modality streams efficiently. Additionally, to better mimic the few-shot recognition process, our model is trained in a meta-learning fashion. Extensive experiments on several action recognition benchmarks demonstrate the effectiveness of our model.
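The abstract only briefly describes the two fusion mechanisms: the temporal asynchronization augmentation (pairing an RGB clip with a depth clip sampled at a shifted time position) and the DGAdaIN module (modulating the RGB clip feature with affine parameters predicted from the depth clip feature). The PyTorch sketch below illustrates one plausible reading of each; the function asynchronous_depth_start, the max_offset hyper-parameter, the per-sample normalization over the channel dimension, and the linear heads to_gamma/to_beta are assumptions for illustration, not the authors' exact implementation.

```python
import random

import torch
import torch.nn as nn


def asynchronous_depth_start(rgb_start: int, clip_len: int, num_frames: int,
                             max_offset: int = 8) -> int:
    """Temporal asynchronization (sketch): pick a start index for the depth
    clip that is randomly shifted relative to the RGB clip, so the two
    modalities are not strictly frame-aligned."""
    offset = random.randint(-max_offset, max_offset)
    return min(max(rgb_start + offset, 0), num_frames - clip_len)


class DGAdaIN(nn.Module):
    """Depth-guided adaptive instance normalization (sketch).

    The RGB clip feature is normalized per sample, then re-scaled and
    re-shifted with affine parameters predicted from the depth clip
    feature, so scene information from the depth stream modulates the
    RGB representation."""

    def __init__(self, feat_dim: int):
        super().__init__()
        # Hypothetical heads mapping the depth feature to per-channel
        # scale (gamma) and shift (beta).
        self.to_gamma = nn.Linear(feat_dim, feat_dim)
        self.to_beta = nn.Linear(feat_dim, feat_dim)
        self.eps = 1e-5

    def forward(self, rgb_feat: torch.Tensor,
                depth_feat: torch.Tensor) -> torch.Tensor:
        # rgb_feat, depth_feat: (batch, feat_dim) clip-level features
        # from the RGB and depth backbone streams.
        mu = rgb_feat.mean(dim=1, keepdim=True)
        sigma = rgb_feat.std(dim=1, keepdim=True)
        normalized = (rgb_feat - mu) / (sigma + self.eps)
        gamma = self.to_gamma(depth_feat)
        beta = self.to_beta(depth_feat)
        return gamma * normalized + beta


# Example usage with dummy clip-level features:
# fuse = DGAdaIN(feat_dim=2048)
# fused = fuse(torch.randn(4, 2048), torch.randn(4, 2048))
```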

Task                         Dataset       Model      Metric        Value   Global Rank
Few Shot Action Recognition  HMDB51        AMeFu-Net  1:1 Accuracy  75.5    #5
Few Shot Action Recognition  Kinetics-100  AMeFu-Net  Accuracy      86.8    #1
Few Shot Action Recognition  UCF101        AMeFu-Net  1:1 Accuracy  95.5    #4
