Integrating Three Mechanisms of Visual Attention for Active Visual Search

14 Feb 2017 · Amir Rasouli, John K. Tsotsos

Algorithms for robotic visual search can benefit from visual attention methods that reduce computational cost. Here, we describe how three distinct mechanisms of visual attention can be integrated and productively used to improve search performance. The first is viewpoint selection, proposed earlier, which uses a greedy search over a probabilistic occupancy grid representation. The second is top-down object-based attention using a histogram backprojection method, also previously described. The third is visual saliency. This is novel in the sense that saliency is not used as a region-of-interest method for the current image but rather as a non-combinatorial form of look-ahead for future viewpoint selection. Additionally, the integration of these three attentional schemes within a single framework is unique and has not been previously studied. We examine our proposed method in scenarios where little or no information about the environment is available. Through extensive experiments on a mobile robot, we show that our method improves visual search performance by reducing the time and number of actions required.
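The greedy viewpoint-selection step over an occupancy grid can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the dictionary-based grid, and the per-viewpoint visibility sets are all assumptions made for the example.

```python
def greedy_next_viewpoint(grid, viewpoints):
    """Greedily pick the next viewpoint for visual search (illustrative sketch).

    grid: dict mapping a cell, e.g. (row, col), to the current
          probability that the target occupies that cell.
    viewpoints: dict mapping a viewpoint id to the set of cells
          the sensor would observe from that viewpoint.
    Returns the viewpoint whose visible cells carry the largest
    summed target probability.
    """
    def expected_mass(vp):
        # Sum the occupancy probabilities of all cells visible from vp.
        return sum(grid.get(cell, 0.0) for cell in viewpoints[vp])
    return max(viewpoints, key=expected_mass)


# Toy example: most of the probability mass sits in cell (1, 1),
# so the viewpoint that observes that cell is selected.
grid = {(0, 0): 0.1, (0, 1): 0.3, (1, 1): 0.5}
viewpoints = {"left": {(0, 0), (0, 1)}, "right": {(1, 0), (1, 1)}}
print(greedy_next_viewpoint(grid, viewpoints))  # -> right
```

In a full search loop of this kind, the grid would be updated after each observation (e.g. downweighting cells that were seen without a detection) and the greedy selection repeated until the target is found; the paper additionally biases this choice with backprojection and saliency cues.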
