Long-Term Feature Banks for Detailed Video Understanding

To understand the world, we humans constantly need to relate the present to the past, and put events in context. In this paper, we enable existing video models to do the same. We propose a long-term feature bank---supportive information extracted over the entire span of a video---to augment state-of-the-art video models that otherwise would only view short clips of 2-5 seconds. Our experiments demonstrate that augmenting 3D convolutional networks with a long-term feature bank yields state-of-the-art results on three challenging video datasets: AVA, EPIC-Kitchens, and Charades.
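The long-term feature bank is combined with the short-term clip features through a feature bank operator (FBO); the paper instantiates it with attention-style (non-local) blocks. The following is a minimal numpy sketch of the attention variant, with illustrative names and shapes rather than the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feature_bank_operator(short_term, long_term_bank):
    """Attention-style feature bank operator (illustrative sketch).

    short_term:     (N, C) features from the current 2-5 s clip.
    long_term_bank: (M, C) features precomputed over the whole video.
    Returns (N, C) context features, one per short-term feature.
    """
    C = short_term.shape[1]
    # Scaled dot-product attention: each short-term feature attends
    # over every entry of the long-term feature bank.
    attn = softmax(short_term @ long_term_bank.T / np.sqrt(C), axis=1)  # (N, M)
    return attn @ long_term_bank  # (N, C)

# Toy usage: 4 clip features attend over a bank of 30 video-level features.
rng = np.random.default_rng(0)
S = rng.standard_normal((4, 256))    # short-term features
L = rng.standard_normal((30, 256))   # long-term feature bank
ctx = feature_bank_operator(S, L)
fused = np.concatenate([S, ctx], axis=1)  # passed to the classifier head
print(fused.shape)  # (4, 512)
```

Concatenating the attended context with the original short-term features (rather than replacing them) lets the classifier use both the immediate clip evidence and the video-level context.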

CVPR 2019
Task                             Dataset            Model                           Metric              Value   Rank
Action Recognition               AVA v2.1           LFB (Kinetics-400 pretraining)  mAP (Val)           27.7    #5
Action Classification            Charades           LFB                             mAP                 42.5    #25
Egocentric Activity Recognition  EPIC-KITCHENS-55   LFB Max                         Actions Top-1 (S2)  21.2    #3
Egocentric Activity Recognition  EPIC-KITCHENS-55   LFB Max                         Actions Top-1 (S1)  32.70   #4