Jointly Modeling Motion and Appearance Cues for Robust RGB-T Tracking

4 Jul 2020  ·  Pengyu Zhang, Jie Zhao, Dong Wang, Huchuan Lu, Xiaoyun Yang

In this study, we propose a novel RGB-T tracking framework that jointly models appearance and motion cues. First, to obtain a robust appearance model, we develop a novel late-fusion method to infer fusion weight maps for the RGB and thermal (T) modalities. The fusion weights are predicted by offline-trained global and local multimodal fusion networks, and are then used to linearly combine the response maps of the RGB and T modalities. Second, when the appearance cue is unreliable, we comprehensively take motion cues, i.e., target and camera motion, into account to keep the tracker robust. We further propose a tracker switcher that flexibly switches between the appearance and motion trackers. Extensive results on three recent RGB-T tracking datasets show that the proposed tracker performs significantly better than other state-of-the-art algorithms.
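To make the late-fusion step concrete, the sketch below linearly combines the RGB and thermal response maps under per-pixel weight maps, with a toy confidence check standing in for the tracker switcher. This is a minimal illustration under stated assumptions, not the paper's implementation: the function names, the constant weight values, and the peak-response rule are hypothetical, whereas in the paper the weight maps are predicted by the offline-trained global and local multimodal fusion networks.

```python
import numpy as np

def fuse_response_maps(resp_rgb, resp_t, w_rgb, w_t):
    """Linearly combine RGB and thermal response maps using
    per-pixel fusion weight maps (normalized to sum to 1)."""
    total = w_rgb + w_t + 1e-8  # avoid division by zero
    return (w_rgb / total) * resp_rgb + (w_t / total) * resp_t

def select_tracker(fused_resp, tau=0.5):
    """Hypothetical switcher rule: trust the appearance tracker only
    when the fused response peak is confident, else fall back to motion."""
    return "appearance" if fused_resp.max() >= tau else "motion"

# Demo on a 50x50 search region with random response maps.
rng = np.random.default_rng(0)
resp_rgb, resp_t = rng.random((50, 50)), rng.random((50, 50))
# Constant weight maps for illustration only; the paper infers these
# maps with offline-trained global and local fusion networks.
w_rgb, w_t = np.full((50, 50), 0.6), np.full((50, 50), 0.4)

fused = fuse_response_maps(resp_rgb, resp_t, w_rgb, w_t)
peak = np.unravel_index(fused.argmax(), fused.shape)
print(select_tracker(fused), peak)  # e.g. 'appearance' and the peak location
```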

Datasets

GTOT  ·  RGBT234

Results from the Paper


Task           | Dataset | Model | Metric    | Value | Global Rank
RGB-T Tracking | GTOT    | JMMAC | Precision | 90.2  | #4
RGB-T Tracking | GTOT    | JMMAC | Success   | 73.2  | #5
RGB-T Tracking | RGBT234 | JMMAC | Precision | 79.0  | #17
RGB-T Tracking | RGBT234 | JMMAC | Success   | 57.3  | #18
