Toward Better Understanding of Saliency Prediction in Augmented 360 Degree Videos

Augmented reality (AR) overlays digital content onto the real world. In an AR system, accurate and precise estimation of users' visual fixations and head movements can enhance the quality of experience by allocating more computational resources to the areas of interest. However, there is little research on understanding how users visually explore scenes in an AR system or on modeling AR visual attention. To bridge the gap between saliency prediction on real-world scenes and on scenes augmented with virtual information, we construct the ARVR saliency dataset, consisting of 12 diverse videos viewed by 20 people. Virtual reality (VR) is employed to simulate the real world, and annotations from object recognition and tracking are blended into the omnidirectional videos as augmented content. Saliency annotations of head and eye movements are collected for both the original and augmented videos, and together they constitute the ARVR dataset. We also design a model for saliency prediction in AR. Local block images are extracted to simulate the viewport and to offset projection distortion. Conspicuous visual cues in the local viewports are extracted as spatial features, and optical flow is estimated as the key temporal feature. We further consider the interplay between virtual information and reality: the composition of the augmentation information is distinguished, and the joint effects of adversarial and complementary augmentation are estimated. We then build a graph that takes each block image as a node. Both the visual saliency mechanism and the characteristics of viewing behavior are considered when computing the edge weights of the graph, which is interpreted as a Markov chain. The fraction of visual attention diverted to each block image is estimated from the equilibrium distribution of this chain.
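
The final step of the model, spreading visual attention over the block graph and reading it off as the equilibrium distribution of a Markov chain, can be sketched in code. The snippet below is a minimal Python illustration in the spirit of graph-based saliency, not the paper's exact formulation: the per-block feature vectors, the block-center coordinates, the edge-weight function, and the parameter sigma_d are assumptions introduced purely for illustration.

    import numpy as np

    def stationary_attention(block_features, block_centers, sigma_d=0.5,
                             n_iters=1000, tol=1e-9):
        """Estimate the fraction of visual attention allotted to each block image.

        Each block image is one graph node. Edge weights combine feature
        dissimilarity (conspicuous visual cues) with spatial proximity
        (a rough stand-in for viewing-behavior bias). The row-normalized
        weight matrix is treated as a Markov chain, and its equilibrium
        distribution is returned. All modeling choices here are illustrative.
        """
        n = block_features.shape[0]
        # Pairwise feature dissimilarity and spatial distance between blocks.
        feat_diff = np.linalg.norm(block_features[:, None] - block_features[None], axis=-1)
        dist = np.linalg.norm(block_centers[:, None] - block_centers[None], axis=-1)
        # Edges favor blocks that look different from spatially nearby blocks.
        W = feat_diff * np.exp(-dist ** 2 / (2 * sigma_d ** 2))
        np.fill_diagonal(W, 0.0)
        # Row-normalize to obtain transition probabilities of the Markov chain.
        P = W / (W.sum(axis=1, keepdims=True) + 1e-12)
        # Power iteration converges to the equilibrium (stationary) distribution.
        pi = np.full(n, 1.0 / n)
        for _ in range(n_iters):
            pi_next = pi @ P
            pi_next /= pi_next.sum()
            if np.abs(pi_next - pi).sum() < tol:
                pi = pi_next
                break
            pi = pi_next
        return pi  # pi[i] is the estimated share of attention on block i

    # Hypothetical usage: 64 block images with 128-D features and 2-D centers.
    feats = np.random.rand(64, 128)
    centers = np.random.rand(64, 2)
    attention = stationary_attention(feats, centers)

In this sketch, blocks whose features stand out from their spatial neighbors receive larger edge weights and therefore accumulate more stationary probability mass, which is then read as the per-block share of visual attention.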
