ATOM: Accurate Tracking by Overlap Maximization

While recent years have witnessed astonishing improvements in visual tracking robustness, the advancements in tracking accuracy have been limited. As the focus has been directed towards the development of powerful classifiers, the problem of accurate target state estimation has been largely overlooked. In fact, most trackers resort to a simple multi-scale search in order to estimate the target bounding box. We argue that this approach is fundamentally limited since target estimation is a complex task, requiring high-level knowledge about the object. We address this problem by proposing a novel tracking architecture, consisting of dedicated target estimation and classification components. High-level knowledge is incorporated into the target estimation through extensive offline learning. Our target estimation component is trained to predict the overlap between the target object and an estimated bounding box. By carefully integrating target-specific information, our approach achieves previously unseen bounding box accuracy. We further introduce a classification component that is trained online to guarantee high discriminative power in the presence of distractors. Our final tracking framework sets a new state-of-the-art on five challenging benchmarks. On the new large-scale TrackingNet dataset, our tracker ATOM achieves a relative gain of 15% over the previous best approach, while running at over 30 FPS. Code and models are available at https://github.com/visionml/pytracking.
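
To make the two-component design concrete, below is a minimal PyTorch sketch of the idea. Everything here is a hypothetical stand-in for illustration (`IoUPredictor`, `refine_box`, `train_classifier`, and all dimensions are assumptions, not the authors' implementation; the actual model, including the target-conditioned IoU head and the fast conjugate-gradient online optimizer, lives in the pytracking repository linked above). The online-trained classifier produces a coarse target location, and the offline-trained estimation head then refines the bounding box by gradient ascent on its predicted overlap.

```python
# Illustrative sketch only: names and shapes are assumptions, not ATOM's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class IoUPredictor(nn.Module):
    """Hypothetical stand-in for the offline-trained estimation head:
    predicts the overlap (IoU) between the target and a candidate box."""

    def __init__(self, feat_dim=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 4, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, feat, box):
        # feat: (B, feat_dim) pooled image features; box: (B, 4) as (x, y, w, h)
        return self.mlp(torch.cat([feat, box], dim=1)).squeeze(1)


def refine_box(predictor, feat, init_box, steps=10, lr=1.0):
    """Overlap maximization: gradient ascent on the predicted IoU with
    respect to the box coordinates themselves."""
    box = init_box.clone().requires_grad_(True)
    for _ in range(steps):
        iou = predictor(feat, box).sum()
        (grad,) = torch.autograd.grad(iou, box)
        with torch.no_grad():
            box += lr * grad * box[:, 2:].repeat(1, 2)  # scale steps by box size
    return box.detach()


def train_classifier(feat, label, ksize=5, steps=60, lr=5e-3, reg=1e-4):
    """Simplified online classification component: learn a conv filter w
    minimizing ||w * feat - label||^2 + reg * ||w||^2 with plain SGD.
    (ATOM itself uses a two-layer head and a Gauss-Newton/conjugate-gradient
    optimizer instead, for much faster online convergence.)"""
    w = torch.zeros(1, feat.shape[1], ksize, ksize, requires_grad=True)
    opt = torch.optim.SGD([w], lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(F.conv2d(feat, w, padding=ksize // 2).squeeze(), label)
        loss = loss + reg * (w ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()


# Toy frame: fit the classifier to a Gaussian label, take the score peak as a
# coarse location, then refine a candidate box with the estimation head.
H = W = 18
ys, xs = torch.meshgrid(torch.arange(H).float(), torch.arange(W).float(), indexing="ij")
label = torch.exp(-((ys - 9) ** 2 + (xs - 9) ** 2) / (2 * 2.0 ** 2))
feat_map = torch.randn(1, 64, H, W)            # placeholder backbone features
w = train_classifier(feat_map, label)

score = F.conv2d(feat_map, w, padding=2).squeeze()
cy, cx = divmod(int(score.argmax()), W)        # coarse target location

pooled = torch.randn(1, 256)                   # placeholder pooled features
init_box = torch.tensor([[float(cx), float(cy), 4.0, 4.0]])
print(refine_box(IoUPredictor(), pooled, init_box))
```

Note how the box itself is the optimization variable: accuracy comes from maximizing a learned overlap measure rather than from a multi-scale search.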

Published at CVPR 2019.

Results from the Paper


| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Object Tracking | FE108 | ATOM | Success Rate | 46.5 | #7 |
| Object Tracking | FE108 | ATOM | Averaged Precision | 71.3 | #7 |
| Visual Object Tracking | GOT-10k | ATOM | Average Overlap | 61.0 | #29 |
| Visual Object Tracking | GOT-10k | ATOM | Success Rate 0.5 | 74.2 | #17 |
| Visual Object Tracking | LaSOT | ATOM | AUC | 51.4 | #31 |
| Visual Object Tracking | LaSOT | ATOM | Normalized Precision | 57.6 | #24 |
| Visual Object Tracking | LaSOT | ATOM | Precision | 50.5 | #24 |
| Visual Object Tracking | TrackingNet | ATOM | Precision | 64.84 | #23 |
| Visual Object Tracking | TrackingNet | ATOM | Normalized Precision | 77.11 | #26 |
| Visual Object Tracking | TrackingNet | ATOM | Accuracy | 70.34 | #24 |

Methods


No methods listed for this paper.