Joint Feature Learning and Relation Modeling for Tracking: A One-Stream Framework

22 Mar 2022  ·  Botao Ye, Hong Chang, Bingpeng Ma, Shiguang Shan ·

The currently popular two-stream, two-stage tracking framework extracts template and search-region features separately and only then performs relation modeling; the extracted features therefore lack awareness of the target and have limited target-background discriminability. To tackle this issue, we propose a novel one-stream tracking (OSTrack) framework that unifies feature learning and relation modeling by bridging the template-search image pairs with bidirectional information flows. In this way, discriminative target-oriented features can be dynamically extracted through mutual guidance. Since no extra heavy relation modeling module is needed and the implementation is highly parallelizable, the proposed tracker runs at a fast speed. To further improve inference efficiency, we propose an in-network candidate early elimination module based on the strong similarity prior computed within the one-stream framework. As a unified framework, OSTrack achieves state-of-the-art performance on multiple benchmarks; in particular, it shows impressive results on the one-shot tracking benchmark GOT-10k, achieving 73.7% AO and improving over the previous best result (SwinTrack) by 4.3%. Besides, our method maintains a good performance-speed trade-off and shows faster convergence. The code and models will be available at https://github.com/botaoye/OSTrack.
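The two core ideas, joint feature learning via self-attention over concatenated template and search tokens, and early elimination of low-similarity search-region candidates, can be illustrated with a minimal NumPy sketch. This is an illustrative simplification, not the paper's implementation: projections are omitted, single-head attention is used, and the function names (`joint_attention`, `eliminate_candidates`) and the `keep_ratio` parameter are our own placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_attention(z, x):
    """One joint self-attention pass over concatenated template (z) and
    search-region (x) tokens: feature extraction and relation modeling
    happen in the same operation, with bidirectional information flow
    (identity Q/K/V projections for brevity)."""
    tokens = np.concatenate([z, x], axis=0)            # (Nz + Nx, d)
    d = tokens.shape[-1]
    attn = softmax(tokens @ tokens.T / np.sqrt(d))     # (Nz+Nx, Nz+Nx)
    return attn @ tokens, attn

def eliminate_candidates(x_tokens, attn, n_z, keep_ratio=0.7):
    """Drop search-region tokens that receive the least average attention
    from the template tokens -- a stand-in for the in-network candidate
    early elimination based on the template-search similarity prior."""
    n_x = x_tokens.shape[0]
    scores = attn[:n_z, n_z:].mean(axis=0)             # (Nx,) similarity to template
    k = max(1, int(round(keep_ratio * n_x)))
    keep = np.sort(np.argsort(scores)[::-1][:k])       # top-k, spatial order kept
    return x_tokens[keep], keep

# Usage sketch: 64 template tokens, 256 search tokens, dim 32.
rng = np.random.default_rng(0)
z = rng.standard_normal((64, 32))
x = rng.standard_normal((256, 32))
fused, attn = joint_attention(z, x)
kept, idx = eliminate_candidates(fused[64:], attn, n_z=64, keep_ratio=0.7)
```

Because the similarity scores are a by-product of the attention map already computed in the one-stream forward pass, the elimination step adds essentially no extra cost while shrinking the token set for later layers.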

Results from the Paper


| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Visual Object Tracking | GOT-10k | OSTrack-384 | Average Overlap | 73.7 | #1 |
| Visual Object Tracking | GOT-10k | OSTrack-384 | Success Rate 0.5 | 83.2 | #1 |
| Visual Object Tracking | GOT-10k | OSTrack-384 | Success Rate 0.75 | 70.8 | #1 |
| Visual Object Tracking | LaSOT | OSTrack-384 | AUC | 71.1 | #1 |
| Visual Object Tracking | LaSOT | OSTrack-384 | Normalized Precision | 81.1 | #1 |
| Visual Object Tracking | LaSOT | OSTrack-384 | Precision | 77.6 | #1 |
| Visual Object Tracking | TrackingNet | OSTrack-384 | Precision | 83.2 | #1 |
| Visual Object Tracking | TrackingNet | OSTrack-384 | Normalized Precision | 88.5 | #2 |
| Visual Object Tracking | TrackingNet | OSTrack-384 | Accuracy | 83.9 | #2 |
| Visual Object Tracking | UAV123 | OSTrack-384 | AUC | 0.707 | #1 |
