Convolutional Regression for Visual Tracking

14 Nov 2016  ·  Kai Chen, Wenbing Tao ·

Recently, discriminatively learned correlation filters (DCF) have drawn much attention in the visual object tracking community. The success of DCF is largely attributed to the fact that a large number of samples are used to train the ridge regression model and predict the object's location. To solve the regression problem efficiently, these samples are all generated by circular shifts of a search patch. However, these synthetic samples also introduce negative effects that weaken the robustness of DCF-based trackers. In this paper, we propose a Convolutional Regression framework for visual tracking (CRT). Instead of learning the linear regression model in closed form, we solve the regression problem by optimizing a one-channel-output convolution layer with Gradient Descent (GD). In particular, the receptive field of the convolution layer is set to the size of the object. In contrast to DCF, this makes it possible to incorporate all "real" samples clipped from the whole image. A critical issue of the GD approach is that most of the convolutional samples are negative, so the contribution of positive samples is suppressed. To address this problem, we propose a novel Automatic Hard Negative Mining method to eliminate easy negatives and enhance positives. Extensive experiments are conducted on a widely used benchmark with 100 sequences. The results show that the proposed algorithm achieves outstanding performance and outperforms almost all existing DCF-based algorithms.
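To make the idea concrete, below is a minimal sketch of one convolutional regression update as described in the abstract: a single one-channel-output convolution whose kernel matches the object size is fit by gradient descent to a Gaussian-shaped response map, with a simplified hard-negative-mining step that discards easy negatives. All names, shapes, thresholds, and the specific mining criterion are illustrative assumptions, not the authors' released implementation.

```python
# Sketch only: assumes PyTorch; feat/label shapes, the 0.1 threshold,
# and the mean-error mining rule are hypothetical choices.
import torch
import torch.nn.functional as F

def train_conv_regression(feat, label, obj_h, obj_w, steps=100, lr=1e-5, lam=1e-4):
    """Fit a one-channel-output convolution layer by gradient descent.

    feat  : (1, C, H, W) feature map of the search region.
    label : (1, 1, H-obj_h+1, W-obj_w+1) Gaussian-shaped regression target.
    The kernel spans obj_h x obj_w, so the receptive field equals the object size.
    """
    c = feat.shape[1]
    weight = torch.zeros(1, c, obj_h, obj_w, requires_grad=True)
    bias = torch.zeros(1, requires_grad=True)
    opt = torch.optim.SGD([weight, bias], lr=lr, momentum=0.9)

    for _ in range(steps):
        pred = F.conv2d(feat, weight, bias)   # (1, 1, H', W') response map
        err2 = (pred - label) ** 2            # per-location squared error

        # Simplified hard negative mining: keep positives (high label values)
        # and hard negatives (above-average error); drop easy negatives.
        keep = (label > 0.1) | (err2 > err2.mean())
        loss = (err2 * keep.float()).mean() + lam * (weight ** 2).sum()

        opt.zero_grad()
        loss.backward()
        opt.step()

    return weight.detach(), bias.detach()
```

At test time, the learned filter would be convolved over the new frame's features and the object location taken as the argmax of the resulting response map, analogous to the detection step of DCF trackers.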
