Cross-modal Orthogonal High-rank Augmentation for RGB-Event Transformer-trackers

ICCV 2023 · Zhiyu Zhu, Junhui Hou, Dapeng Oliver Wu

This paper addresses the problem of cross-modal object tracking from RGB videos and event data. Rather than constructing a complex cross-modal fusion network, we explore the great potential of a pre-trained vision Transformer (ViT). In particular, we carefully investigate plug-and-play training augmentations that encourage the ViT to bridge the vast distribution gap between the two modalities, enabling comprehensive cross-modal information interaction and thus enhancing its tracking ability. Specifically, we propose a mask modeling strategy that randomly masks a specific modality of some tokens to enforce proactive interaction between tokens from different modalities. To mitigate network oscillations resulting from the masking strategy and further amplify its positive effect, we then propose a theoretically grounded orthogonal high-rank loss to regularize the attention matrix. Extensive experiments demonstrate that our plug-and-play training augmentation techniques significantly boost state-of-the-art one-stream and two-stream trackers in terms of both tracking precision and success rate. Our new perspective and findings will potentially bring insights to the field of leveraging powerful pre-trained ViTs to model cross-modal data. The code will be publicly available.
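The masking strategy can be pictured with a short sketch. The PyTorch snippet below randomly selects token positions and drops exactly one modality at each, so the Transformer must recover the masked signal from the other modality; the function name, tensor shapes, mask ratio, and zero-fill value are our illustrative assumptions, not the paper's exact recipe.

```python
import torch

def mask_modality_tokens(rgb_tokens, event_tokens, mask_ratio=0.1):
    """Randomly pick token positions and drop one modality at each.

    rgb_tokens, event_tokens: (B, N, C) token embeddings at aligned positions.
    mask_ratio and the zero-fill are illustrative choices (assumptions).
    """
    B, N, _ = rgb_tokens.shape
    device = rgb_tokens.device
    # Which token positions participate in masking (per batch element).
    masked = torch.rand(B, N, device=device) < mask_ratio
    # At each masked position, drop the RGB token or the event token (50/50).
    drop_rgb = torch.rand(B, N, device=device) < 0.5
    rgb_out = rgb_tokens.masked_fill((masked & drop_rgb).unsqueeze(-1), 0.0)
    event_out = event_tokens.masked_fill((masked & ~drop_rgb).unsqueeze(-1), 0.0)
    return rgb_out, event_out
```

Being a training augmentation, such masking would be applied only during training, with the unmasked token streams used at inference time.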

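The orthogonal high-rank regularizer can likewise be sketched. A common surrogate for pushing the rows of an attention matrix toward orthonormality, and hence keeping it high-rank, is a Frobenius penalty on A Aᵀ − I; the PyTorch function below implements that generic surrogate and is not necessarily the paper's exact formulation.

```python
import torch

def orthogonal_high_rank_loss(attn):
    """Frobenius penalty ||A A^T - I||^2 encouraging orthonormal rows.

    attn: (B, H, N, N) post-softmax attention matrices (shape is an
    assumption). Orthonormal rows imply full rank, so minimizing this
    penalty discourages rank collapse of the attention matrix.
    """
    N = attn.shape[-1]
    eye = torch.eye(N, device=attn.device, dtype=attn.dtype)
    gram = attn @ attn.transpose(-2, -1)  # (B, H, N, N) row-wise Gram matrix
    return ((gram - eye) ** 2).mean()
```

In training, such a term would be added to the tracking objective with a small weight, e.g. `loss = track_loss + lam * orthogonal_high_rank_loss(attn)`, where `lam` is a hypothetical balancing hyperparameter.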

Datasets

COESOT · FE108

Results from the Paper


Task             Dataset  Model              Metric              Value  Rank
Object Tracking  COESOT   HR-CEUTrack-Large  Success Rate        65.0   #1
Object Tracking  COESOT   HR-CEUTrack-Large  Precision Rate      73.8   #1
Object Tracking  COESOT   HR-CEUTrack-Base   Success Rate        63.2   #2
Object Tracking  COESOT   HR-CEUTrack-Base   Precision Rate      71.9   #2
Object Tracking  FE108    HR-MonTrack-Tiny   Success Rate        66.3   #2
Object Tracking  FE108    HR-MonTrack-Tiny   Averaged Precision  95.3   #2
Object Tracking  FE108    HR-MonTrack-Base   Success Rate        68.5   #1
Object Tracking  FE108    HR-MonTrack-Base   Averaged Precision  96.2   #1
