RGB Stream Is Enough for Temporal Action Detection

9 Jul 2021  ·  Chenhao Wang, Hongxiang Cai, Yuxin Zou, Yichao Xiong ·

State-of-the-art temporal action detectors to date are based on two-stream input consisting of RGB frames and optical flow. Although combining RGB frames and optical flow boosts performance significantly, optical flow is a hand-designed representation that not only requires heavy computation but is also methodologically unsatisfactory, since two-stream methods are often not learned end-to-end jointly with the flow. In this paper, we argue that optical flow is dispensable for high-accuracy temporal action detection and that image-level data augmentation (ILDA) is the key to avoiding performance degradation when optical flow is removed. To evaluate the effectiveness of ILDA, we design a simple yet efficient one-stage temporal action detector based on a single RGB stream, named DaoTAD. Our results show that when trained with ILDA, DaoTAD achieves accuracy comparable to all existing state-of-the-art two-stream detectors while surpassing the inference speed of previous methods by a large margin, reaching an astounding 6668 fps on a GeForce GTX 1080 Ti. Code is available at https://github.com/Media-Smart/vedatad.
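The abstract's central claim is that image-level data augmentation (ILDA) lets an RGB-only detector match two-stream accuracy. A minimal sketch of what "image-level" augmentation on a video clip looks like is below: augmentation parameters are sampled once per clip and applied identically to every frame, so spatial transforms stay temporally consistent. The function name and the specific ops (random crop, horizontal flip, brightness jitter) are illustrative assumptions; the exact ILDA recipe is defined in the released vedatad code.

```python
import numpy as np

def augment_clip(clip, rng, crop_size=96, p_flip=0.5):
    """Apply the same image-level augmentation to every frame of a clip.

    clip: (T, H, W, C) uint8 array. Parameters are sampled once per clip,
    then applied to all T frames, keeping the clip temporally consistent.
    The op set here is a hypothetical example, not the paper's exact recipe.
    """
    t, h, w, c = clip.shape
    # Sample augmentation parameters once for the whole clip.
    y0 = rng.integers(0, h - crop_size + 1)
    x0 = rng.integers(0, w - crop_size + 1)
    flip = rng.random() < p_flip
    brightness = rng.uniform(0.8, 1.2)

    # Spatial crop shared by all frames.
    out = clip[:, y0:y0 + crop_size, x0:x0 + crop_size, :].astype(np.float32)
    if flip:
        out = out[:, :, ::-1, :]  # horizontal flip along the width axis
    # Photometric jitter, identical for every frame.
    out = np.clip(out * brightness, 0, 255).astype(np.uint8)
    return out

rng = np.random.default_rng(0)
clip = rng.integers(0, 256, size=(8, 112, 112, 3), dtype=np.uint8)
aug = augment_clip(clip, rng)
print(aug.shape)  # (8, 96, 96, 3)
```

Sampling once per clip (rather than per frame) is what preserves the motion cues that would otherwise come from optical flow.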

Results

Task: Temporal Action Localization
Dataset: THUMOS'14
Model: DaoTAD

Metric               Value   Global Rank
mAP IoU@0.3          62.8    #25
mAP IoU@0.4          59.5    #23
mAP IoU@0.5          53.8    #20
mAP IoU@0.6          43.6    #20
mAP IoU@0.7          30.1    #20
Avg mAP (0.3:0.7)    50.0    #25
