In this work, we propose a novel gradient-guided network to exploit the discriminative information in gradients and update the template in the siamese network through feed-forward and backward operations.
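The forward-and-backward template update can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual network: `template`, `search_feat`, the toy correlation score, and the squared-error loss are all placeholder assumptions standing in for the learned components.

```python
import numpy as np

def template_gradient_update(template, target_response, search_feat, lr=0.1):
    """One gradient-guided refinement step on the template (illustrative).

    Forward pass: correlate the template with the search features.
    Backward pass: the gradient of a squared-error loss w.r.t. the
    template carries discriminative information and is used to update it.
    """
    # Forward: a toy correlation score between template and search features
    response = np.sum(template * search_feat)
    # Backward: d(loss)/d(template) for loss = 0.5 * (response - target)^2
    grad = (response - target_response) * search_feat
    # Update the template using the gradient
    return template - lr * grad
```

One such step moves the template's response toward the target response; the real tracker replaces the toy correlation with a Siamese network's feature embedding.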
The problem of visual object tracking has traditionally been handled by two divergent paradigms: either learning a model of the object's appearance exclusively online, or matching candidates against the target in an offline-trained embedding space.
Meanwhile, convolutional features are extracted to provide a more comprehensive representation of the object.
Our tracker achieves leading performance on OTB2013, OTB2015, VOT2015, VOT2016 and LaSOT, and operates at a real-time speed of 26 FPS, which indicates that our method is both effective and practical.
In this work, we propose a novel adaptive spatially-regularized correlation filters (ASRCF) model to simultaneously optimize the filter coefficients and the spatial regularization weight.
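A simplified single-channel version of such an objective can be written out directly; the function below is a sketch under assumed names (`lam1`, `lam2`, `w_ref` are hypothetical hyperparameters, and the real model optimizes filter and weight jointly rather than just evaluating a loss).

```python
import numpy as np

def asrcf_style_loss(h, w, x, y, lam1=1.0, lam2=1.0, w_ref=None):
    """Illustrative adaptive spatially-regularized correlation-filter loss.

    data term:    ||y - x (*) h||^2      (circular correlation via FFT)
    spatial term: lam1 * ||w . h||^2     (elementwise spatial weight w on h)
    weight term:  lam2 * ||w - w_ref||^2 (keep the learned weight near a prior)
    """
    if w_ref is None:
        w_ref = np.ones_like(w)
    # Circular cross-correlation of signal x with filter h, via the FFT
    response = np.real(np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(h))))
    data = np.sum((y - response) ** 2)
    spatial = lam1 * np.sum((w * h) ** 2)
    weight = lam2 * np.sum((w - w_ref) ** 2)
    return data + spatial + weight
```

Because both the filter `h` and the weight `w` appear in the objective, the natural solver alternates between the two: fix `w` and solve for `h`, then fix `h` and solve for `w`.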
SOTA for Visual Tracking on OTB-100
The current drive towards end-to-end trainable computer vision systems poses major challenges for the task of visual tracking.
It combines a Convolutional Neural Network (CNN) backbone and a cross-correlation operator, and takes advantage of the features from exemplar images for more accurate object tracking.
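The cross-correlation step itself is simple: slide the exemplar's feature map over the larger search-region feature map and record a similarity score at each offset. The sketch below is a minimal single-channel version with assumed shapes; in the real tracker both inputs come from the shared CNN backbone and carry many channels.

```python
import numpy as np

def siamese_xcorr(exemplar, search):
    """Cross-correlate exemplar features over a search feature map.

    Returns a response map whose peak marks where the search region
    best matches the exemplar (single-channel illustrative version).
    """
    eh, ew = exemplar.shape
    sh, sw = search.shape
    out = np.empty((sh - eh + 1, sw - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Inner product of the exemplar with each search window
            out[i, j] = np.sum(exemplar * search[i:i + eh, j:j + ew])
    return out
```

The predicted target location is simply the argmax of the returned response map.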
#2 best model for Visual Object Tracking on VOT2017
Siamese networks have drawn great attention in visual tracking because of their balanced accuracy and speed.
SOTA for Visual Object Tracking on VOT2017