Understanding spatial correlation in eye-fixation maps for visual attention in videos

30 Jan 2019 · Tariq Alshawi, Zhiling Long, Ghassan AlRegib

In this paper, we present an analysis of recorded eye-fixation data from human subjects viewing video sequences. The purpose is to better understand visual attention for videos...
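The abstract above is truncated, so the paper's actual analysis is not reproduced here. As a rough, illustrative sketch of what "spatial correlation in an eye-fixation map" can refer to, the snippet below computes the Pearson correlation between a per-frame fixation map and spatially shifted copies of itself, i.e. a simple spatial autocorrelation at a given pixel offset. All names (`spatial_autocorrelation`, the toy Gaussian map) and the choice of Pearson correlation are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def spatial_autocorrelation(fixation_map: np.ndarray, dx: int, dy: int) -> float:
    """Pearson correlation between a fixation map and a copy of itself
    shifted by (dy, dx) pixels. Values near 1 suggest fixation density
    varies smoothly over space at that offset."""
    h, w = fixation_map.shape
    # Overlapping regions of the original map and its shifted copy.
    a = fixation_map[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
    b = fixation_map[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    # Guard against constant regions, where correlation is undefined.
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

if __name__ == "__main__":
    # Toy fixation map: a smooth 2-D Gaussian "blob" of attention.
    ys, xs = np.mgrid[0:64, 0:64]
    toy_map = np.exp(-((ys - 32) ** 2 + (xs - 32) ** 2) / (2 * 8.0 ** 2))
    for offset in (1, 4, 16):
        print(offset, spatial_autocorrelation(toy_map, dx=offset, dy=0))
```

In this toy example the correlation decays as the offset grows, which is the kind of spatial-neighborhood behavior one would expect to examine when characterizing correlation structure in eye-fixation maps.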
