FPAN: Fine-grained and Progressive Attention Localization Network for Data Retrieval

5 Apr 2018  ·  Sijia Chen, Bin Song, Jie Guo, Xiaojiang Du, Mohsen Guizani

Localization of a queried object for data retrieval is a key issue in Intelligent and Connected Transportation Systems (ICTS). However, because traditional transportation systems lack intelligence, manually retrieving and locating queried objects among a large number of images can consume tremendous resources. To address this issue, we propose an effective method for query-based object localization that uses artificial intelligence techniques to automatically locate the queried object against a complex background. The proposed method, termed the Fine-grained and Progressive Attention Localization Network (FPAN), takes an image and a queried object as input and accurately locates the target object in the image. Specifically, a fine-grained attention module is naturally embedded into each layer of the convolutional neural network (CNN), gradually suppressing regions that are irrelevant to the queried object and eventually shrinking attention to the target area. We further employ a top-down attention fusion algorithm, realized by a learnable cascade up-sampling structure, to establish the connection between the attention map and the exact location of the queried object in the original image. Furthermore, FPAN is trained by multi-task learning with a box segmentation loss and a cosine loss. Finally, we conduct comprehensive experiments on query-based digit localization and object tracking with synthetic and benchmark datasets, respectively. The experimental results show that our algorithm is far superior to the compared algorithms on the synthetic datasets and outperforms most existing trackers on the OTB and VOT datasets.
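The abstract does not give implementation details, so the following is only a minimal PyTorch-style sketch of the two ideas it describes: a query-conditioned attention map computed at each CNN stage to suppress irrelevant regions, and a learnable cascade up-sampling structure that fuses the per-stage maps top-down into a full-resolution localization map. All module names, channel sizes, and the choice of cosine similarity between the query embedding and local features are assumptions made for illustration, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QueryAttention(nn.Module):
    """Cosine-similarity attention between a query embedding and local features (assumed form)."""
    def __init__(self, feat_channels, query_dim):
        super().__init__()
        self.project = nn.Conv2d(feat_channels, query_dim, kernel_size=1)

    def forward(self, feat, query):                      # feat: (B,C,H,W), query: (B,D)
        local = F.normalize(self.project(feat), dim=1)   # per-location feature directions
        q = F.normalize(query, dim=1)[:, :, None, None]  # broadcast query over spatial grid
        attn = torch.sigmoid((local * q).sum(dim=1, keepdim=True))  # (B,1,H,W)
        return feat * attn, attn                          # suppress regions unlike the query

class TopDownFusion(nn.Module):
    """Learnable cascade up-sampling that fuses coarse-to-fine attention maps."""
    def __init__(self, num_stages):
        super().__init__()
        self.up = nn.ModuleList(
            nn.ConvTranspose2d(1, 1, kernel_size=4, stride=2, padding=1)
            for _ in range(num_stages - 1)
        )

    def forward(self, attn_maps):                         # list of (B,1,h,w), coarsest last
        fused = attn_maps[-1]
        for up, finer in zip(self.up, reversed(attn_maps[:-1])):
            fused = up(fused)                             # learned up-sampling step
            fused = F.interpolate(fused, size=finer.shape[-2:],
                                  mode="bilinear", align_corners=False)
            fused = torch.sigmoid(fused) * finer          # combine with the finer-scale map
        return fused                                      # full-resolution localization map

In a training loop of this form, the fused map would be supervised with the box segmentation loss, while the cosine loss would act on the query and feature embeddings; how the two losses are weighted is not specified in the abstract.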
