AiATrack: Attention in Attention for Transformer Visual Tracking

20 Jul 2022  ·  Shenyuan Gao, Chunluan Zhou, Chao Ma, Xinggang Wang, Junsong Yuan

Transformer trackers have achieved impressive advancements recently, in which the attention mechanism plays an important role. However, the independent correlation computation in the attention mechanism can result in noisy and ambiguous attention weights, which inhibits further performance improvement. To address this issue, we propose an attention in attention (AiA) module, which enhances appropriate correlations and suppresses erroneous ones by seeking consensus among all correlation vectors. The AiA module can be readily applied to both self-attention blocks and cross-attention blocks to facilitate feature aggregation and information propagation for visual tracking. Moreover, we propose a streamlined Transformer tracking framework, dubbed AiATrack, which introduces efficient feature reuse and target-background embeddings to make full use of temporal references. Experiments show that our tracker achieves state-of-the-art performance on six tracking benchmarks while running at real-time speed.
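The core idea is straightforward to sketch: before the usual softmax, an inner attention lets the correlation vectors of the raw query-key correlation map attend to one another, so vectors consistent with the consensus are enhanced and outliers (noisy correlations) are suppressed. Below is a minimal, single-head PyTorch sketch of this idea under stated simplifications; it omits the learned projections and multi-head structure of the paper's actual inner attention, and all class and variable names are illustrative rather than taken from the official code.

```python
# Minimal single-head sketch of "attention in attention" (AiA): an inner
# attention refines the raw correlation map by seeking consensus among its
# correlation vectors before softmax and value aggregation. Illustration
# only; the paper's inner module also uses learned projections.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class AiAAttentionSketch(nn.Module):
    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)

    def forward(self, query, key, value):
        # query: (B, Nq, C); key/value: (B, Nk, C)
        q = self.q_proj(query)
        k = self.k_proj(key)
        v = self.v_proj(value)

        # Raw correlation map: each query-key correlation is computed
        # independently, which is where noisy weights can arise.
        corr = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))  # (B, Nq, Nk)

        # Inner attention: treat each correlation vector (one column of the
        # map, i.e. one key position's responses to all queries) as a token
        # and let the vectors attend to each other. Vectors that agree with
        # many others are reinforced; outliers are down-weighted. The result
        # is added back as a residual correction.
        t = corr.transpose(-2, -1)                               # (B, Nk, Nq)
        inner = F.softmax(t @ t.transpose(-2, -1) / math.sqrt(t.size(-1)),
                          dim=-1)                                # (B, Nk, Nk)
        corr = corr + (inner @ t).transpose(-2, -1)              # (B, Nq, Nk)

        # Standard attention readout on the refined correlation map.
        return F.softmax(corr, dim=-1) @ v

# Hypothetical usage: a cross-attention step between search-region queries
# and reference-frame keys/values (shapes are illustrative).
attn = AiAAttentionSketch(d_model=256)
search = torch.randn(1, 400, 256)      # e.g. 20x20 search-region tokens
reference = torch.randn(1, 400, 256)   # e.g. reference-frame tokens
out = attn(search, reference, reference)  # (1, 400, 256)
```

Because the refinement acts on the correlation map itself rather than on the features, the same inner attention can wrap both self-attention (query = key = value) and cross-attention blocks, which is how the paper applies it.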

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Object Tracking | COESOT | AiATrack | Success Rate | 59.0 | #10 |
| Object Tracking | COESOT | AiATrack | Precision Rate | 67.4 | #7 |
| Visual Object Tracking | GOT-10k | AiATrack | Average Overlap | 69.6 | #16 |
| Visual Object Tracking | GOT-10k | AiATrack | Success Rate (0.5) | 80.0 | #10 |
| Visual Object Tracking | GOT-10k | AiATrack | Success Rate (0.75) | 63.2 | #13 |
| Visual Object Tracking | LaSOT | AiATrack | AUC | 69.0 | #18 |
| Visual Object Tracking | LaSOT | AiATrack | Normalized Precision | 79.4 | #12 |
| Visual Object Tracking | LaSOT | AiATrack | Precision | 73.8 | #15 |
| Visual Object Tracking | NeedForSpeed | AiATrack | AUC | 0.679 | #2 |
| Visual Object Tracking | OTB-100 | AiATrack | AUC | 0.696 | #2 |
| Visual Object Tracking | TrackingNet | AiATrack | Precision | 80.4 | #13 |
| Visual Object Tracking | TrackingNet | AiATrack | Normalized Precision | 87.8 | #13 |
| Visual Object Tracking | TrackingNet | AiATrack | Accuracy | 82.7 | #16 |
| Visual Object Tracking | UAV123 | AiATrack | AUC | 0.706 | #5 |
