PG-Net: Pixel to Global Matching Network for Visual Tracking

Siamese neural networks have been extensively investigated in tracking frameworks thanks to their fast speed and high accuracy. However, few of these approaches devote effort to suppressing background interference. In this paper, a Pixel to Global Matching Network (PG-Net) is proposed to suppress the influence of background in the search image while achieving state-of-the-art tracking performance. To this end, each pixel of the search feature is used to compute its similarity with the global template feature. This matching scheme appropriately reduces the matching area and thus introduces less background interference. In addition, we propose a new tracking framework that performs correlation-shared tracking and employs multiple losses for training, which not only reduces the computational burden but also improves performance. Comparison experiments on various public tracking datasets show that PG-Net obtains state-of-the-art performance while running at fast speed.
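Since no code has been released, the following is only a minimal sketch of the pixel-to-global matching idea as described in the abstract, not the authors' implementation: every spatial position (pixel) of the search feature is compared against the full (global) template feature, producing a per-pixel correlation map. The function name, tensor shapes, and the choice of cosine similarity are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def pixel_to_global_matching(template_feat, search_feat):
    """Sketch of pixel-to-global matching (assumed formulation).

    template_feat: (B, C, Ht, Wt) global template feature
    search_feat:   (B, C, Hs, Ws) search-region feature

    Each search pixel (a C-dimensional vector) is matched against every
    template pixel, yielding a (Ht*Wt)-channel correlation map over the
    search region.
    """
    b, c, ht, wt = template_feat.shape
    _, _, hs, ws = search_feat.shape

    t = template_feat.flatten(2)                # (B, C, Ht*Wt)
    s = search_feat.flatten(2)                  # (B, C, Hs*Ws)

    # Cosine-style similarity between every search pixel and every
    # template pixel (normalization is an assumption, not from the paper).
    t = F.normalize(t, dim=1)
    s = F.normalize(s, dim=1)
    corr = torch.einsum('bck,bcn->bkn', t, s)   # (B, Ht*Wt, Hs*Ws)

    return corr.view(b, ht * wt, hs, ws)        # per-pixel correlation maps
```

Because the matching is done pixel by pixel on the search side, the effective matching area per comparison is small, which is how the abstract motivates the reduced background interference.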
