Transformer Meets Tracker: Exploiting Temporal Context for Robust Visual Tracking

CVPR 2021  ·  Ning Wang, Wengang Zhou, Jie Wang, Houqiang Li

In video object tracking, rich temporal contexts exist among successive frames, yet they have been largely overlooked by existing trackers. In this work, we bridge individual video frames and explore the temporal contexts across them via a transformer architecture for robust object tracking. Different from the classic use of the transformer in natural language processing tasks, we separate its encoder and decoder into two parallel branches and carefully design them within the Siamese-like tracking pipeline. The transformer encoder promotes the target templates via attention-based feature reinforcement, which benefits high-quality tracking model generation. The transformer decoder propagates the tracking cues from previous templates to the current frame, which facilitates the object search process. Our transformer-assisted tracking framework is neat and trained end-to-end. With the proposed transformer, a simple Siamese matching approach is able to outperform the current top-performing trackers. By combining our transformer with the recent discriminative tracking pipeline, our method sets several new state-of-the-art records on prevalent tracking benchmarks.
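The sketch below illustrates the two-branch design described in the abstract as a minimal PyTorch prototype; it is a hedged reading of the high-level dataflow, not the authors' released code. All names, layer counts, and feature shapes here (`TemporalContextTransformer`, single encoder/decoder layers, 256-d features) are illustrative assumptions; the paper's actual transformer is specially adapted for tracking.

```python
# Minimal sketch of the two parallel transformer branches, assuming standard
# PyTorch attention layers. NOT the paper's implementation: module names,
# shapes, and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn


class TemporalContextTransformer(nn.Module):
    """Encoder branch reinforces template features across frames;
    decoder branch propagates tracking cues to the current search frame."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        # Encoder: self-attention over the set of previous template features.
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=1)
        # Decoder: cross-attention from search-region features (queries)
        # to the reinforced templates (keys/values).
        dec_layer = nn.TransformerDecoderLayer(d_model=dim, nhead=heads)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=1)

    def forward(self, templates, search):
        # templates: (T*H*W, B, C) flattened features from previous frames
        # search:    (H*W, B, C)   flattened features from the current frame
        reinforced = self.encoder(templates)           # feature reinforcement
        propagated = self.decoder(search, reinforced)  # cue propagation
        return reinforced, propagated


# Usage sketch: the reinforced templates would feed tracking-model generation
# (e.g. a Siamese correlation or DiMP-style model predictor), while the
# propagated search features would be used for target localization.
model = TemporalContextTransformer()
tmpl = torch.randn(3 * 22 * 22, 1, 256)  # three 22x22 template feature maps
srch = torch.randn(22 * 22, 1, 256)      # current 22x22 search-region map
reinforced, propagated = model(tmpl, srch)
```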


Results from the Paper


| Task                   | Dataset | Model  | Metric Name    | Metric Value | Global Rank |
|------------------------|---------|--------|----------------|--------------|-------------|
| Object Tracking        | COESOT  | TrDiMP | Success Rate   | 60.1         | #8          |
| Object Tracking        | COESOT  | TrDiMP | Precision Rate | 66.9         | #9          |
| Visual Object Tracking | LaSOT   | TrDiMP | AUC            | 63.7         | #25         |
| Visual Object Tracking | LaSOT   | TrDiMP | Precision      | 61.4         | #20         |
