Few-Shot Temporal Action Localization with Query Adaptive Transformer

20 Oct 2021  ·  Sauradip Nag, Xiatian Zhu, Tao Xiang ·

Existing temporal action localization (TAL) works rely on a large number of training videos with exhaustive segment-level annotation, preventing them from scaling to new classes. As a solution to this problem, few-shot TAL (FS-TAL) aims to adapt a model to a new class represented by as few as a single video. Existing FS-TAL methods assume trimmed training videos for new classes. However, this setting is not only unnatural, since actions are typically captured in untrimmed videos, but also ignores background video segments containing vital contextual cues for foreground action segmentation. In this work, we first propose a new FS-TAL setting that uses untrimmed training videos. Further, a novel FS-TAL model is proposed which maximizes knowledge transfer from training classes whilst enabling the model to be dynamically adapted to both the new class and each video of that class simultaneously. This is achieved by introducing a query-adaptive Transformer in the model. Extensive experiments on two action localization benchmarks demonstrate that our method significantly outperforms all state-of-the-art alternatives in both single-domain and cross-domain scenarios. The source code can be found at https://github.com/sauradip/fewshotQAT
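The core idea of query adaptation can be illustrated with a minimal cross-attention sketch: snippet features of the untrimmed query video attend over the one-shot support video's snippet features, producing query-adapted representations for foreground/background segmentation. This is only an illustrative sketch in NumPy; the function name `cross_attend`, the feature dimensions, and the single-head formulation are assumptions, not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(query_feats, support_feats, d_k):
    """Scaled dot-product cross-attention (illustrative sketch):
    each query-video snippet attends over the support (new-class)
    video's snippets, yielding query-adapted class features."""
    scores = query_feats @ support_feats.T / np.sqrt(d_k)  # (Tq, Ts)
    attn = softmax(scores, axis=-1)                        # rows sum to 1
    return attn @ support_feats                            # (Tq, d)

rng = np.random.default_rng(0)
d = 64
query = rng.standard_normal((100, d))    # 100 snippets of the untrimmed query video
support = rng.standard_normal((80, d))   # 80 snippets of the one-shot support video
adapted = cross_attend(query, support, d)
print(adapted.shape)  # (100, 64)
```

In practice such attention would run inside a Transformer block with learned projections and multiple heads, so that adaptation happens per query video rather than with a fixed class prototype.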

Task                                   Dataset      Model   Metric  Value  Global Rank
Few-Shot Temporal Action Localization  ActivityNet  FS-QAT  mIoU    38.5   # 1
Few-Shot Temporal Action Localization  THUMOS14     FS-QAT  mIoU    30.2   # 1