End-to-End Temporal Action Detection with 1B Parameters Across 1000 Frames

28 Nov 2023 · Shuming Liu, Chen-Lin Zhang, Chen Zhao, Bernard Ghanem

Recently, temporal action detection (TAD) has seen significant performance improvements from end-to-end training. However, due to the memory bottleneck, only models of limited scale trained on limited data volumes can afford end-to-end training, which inevitably restricts TAD performance. In this paper, we reduce the memory consumption of end-to-end training and manage to scale up the TAD backbone to 1 billion parameters and the input video to 1,536 frames, leading to a significant improvement in detection performance. The key to our approach lies in our proposed temporal-informative adapter (TIA), a novel lightweight module that reduces training memory. Using TIA, we free the humongous backbone from having to adapt to the TAD task by updating only the parameters in TIA. TIA also leads to better TAD representations by temporally aggregating context from adjacent frames throughout the backbone. We evaluate our model across four representative datasets. Owing to our efficient design, we are able to train end-to-end on VideoMAEv2-giant and achieve 75.4% mAP on THUMOS14, making ours the first end-to-end model to outperform the best feature-based methods. Code is available at https://github.com/sming256/AdaTAD.
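To make the mechanism concrete, below is a minimal PyTorch sketch of an adapter in this spirit: the backbone stays frozen, and a small trainable bottleneck with a depthwise 1D convolution mixes context across adjacent frames before being added back residually. The class name, bottleneck ratio, kernel size, and zero-initialized up-projection are assumptions based on common adapter-tuning designs, not the authors' released implementation (see the linked repository for that).

```python
# Illustrative sketch of a temporal-informative adapter, assuming the common
# bottleneck-adapter design plus a depthwise temporal convolution.
# Names and hyperparameters are hypothetical, not the authors' code.
import torch
import torch.nn as nn


class TemporalInformativeAdapter(nn.Module):
    def __init__(self, dim: int, bottleneck_ratio: float = 0.25, kernel_size: int = 3):
        super().__init__()
        hidden = max(1, int(dim * bottleneck_ratio))
        self.down = nn.Linear(dim, hidden)   # project to a small bottleneck
        self.temporal = nn.Conv1d(           # depthwise conv aggregates adjacent frames
            hidden, hidden, kernel_size, padding=kernel_size // 2, groups=hidden
        )
        self.up = nn.Linear(hidden, dim)     # project back to the backbone width
        self.act = nn.GELU()
        # Zero-init the up-projection so each adapter starts as an identity mapping
        # and the pretrained features pass through unchanged at the start of training.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) frame/clip tokens from one backbone block
        h = self.act(self.down(x))
        h = self.temporal(h.transpose(1, 2)).transpose(1, 2)  # mix context across frames
        return x + self.up(h)                                 # residual connection


if __name__ == "__main__":
    adapter = TemporalInformativeAdapter(dim=1408)  # 1408 = ViT-giant channel width
    x = torch.randn(2, 1536, 1408)                  # (batch, frames, channels)
    assert torch.allclose(adapter(x), x)            # identity at init, as intended
    # In a full setup one would freeze the backbone, e.g.
    #     for p in backbone.parameters(): p.requires_grad_(False)
    # so gradients and optimizer state exist only for the adapters.
```

Under this reading, freezing the 1B-parameter backbone means no gradients or optimizer state are stored for its weights, and the zero-initialized residual lets training depart smoothly from the pretrained representation; both are standard choices in adapter tuning and plausibly account for much of the memory savings the abstract describes.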

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Temporal Action Localization | ActivityNet-1.3 | AdaTAD (VideoMAEv2-giant) | mAP | 41.93 | #3 |
| | | | mAP IOU@0.5 | 61.72 | #2 |
| | | | mAP IOU@0.75 | 43.35 | #2 |
| | | | mAP IOU@0.95 | 10.85 | #1 |
| Temporal Action Localization | EPIC-KITCHENS-100 | AdaTAD (verb, VideoMAE-L) | Avg mAP (0.1-0.5) | 29.3 | #1 |
| | | | mAP IOU@0.1 | 33.1 | #1 |
| | | | mAP IOU@0.2 | 32.2 | #1 |
| | | | mAP IOU@0.3 | 30.4 | #1 |
| | | | mAP IOU@0.4 | 27.5 | #1 |
| | | | mAP IOU@0.5 | 23.1 | #1 |
| Temporal Action Localization | THUMOS'14 | AdaTAD (VideoMAEv2-giant) | Avg mAP (0.3:0.7) | 75.4 | #1 |
| | | | mAP IOU@0.3 | 90.1 | #1 |
| | | | mAP IOU@0.4 | 85.9 | #1 |
| | | | mAP IOU@0.5 | 79.4 | #1 |
| | | | mAP IOU@0.6 | 67.6 | #1 |
| | | | mAP IOU@0.7 | 53.8 | #1 |