no code implementations • 2 Feb 2024 • Hanwei Zhu, Xiangjie Sui, Baoliang Chen, Xuelin Liu, Peilin Chen, Yuming Fang, Shiqi Wang
While abundant research has been conducted on improving the high-level visual understanding and reasoning capabilities of large multimodal models (LMMs), their image quality assessment (IQA) ability has been relatively under-explored.
1 code implementation • 7 Sep 2023 • Xiangjie Sui, Hanwei Zhu, Xuelin Liu, Yuming Fang, Shiqi Wang, Zhou Wang
To address these issues, we introduce a unique generative scanpath representation (GSR) for effective quality inference of 360° images, which aggregates the varied perceptual experiences of multi-hypothesis users under a predefined viewing condition.
1 code implementation • CVPR 2023 • Xiangjie Sui, Yuming Fang, Hanwei Zhu, Shiqi Wang, Zhou Wang
Scanpath prediction for 360° images aims to produce dynamic gaze behaviors based on the human visual perception mechanism.
1 code implementation • 13 Jun 2022 • Wen Wen, Mu Li, Yiru Yao, Xiangjie Sui, Yabin Zhang, Long Lan, Yuming Fang, Kede Ma
Investigating how people perceive virtual reality (VR) videos in the wild (i.e., those captured by everyday users) is a crucial and challenging task in VR-related applications due to complex authentic distortions localized in space and time.
2 code implementations • 21 May 2020 • Xiangjie Sui, Kede Ma, Yiru Yao, Yuming Fang
We first carry out a psychophysical experiment to investigate the interplay among the VR viewing conditions, the user viewing behaviors, and the perceived quality of 360° images.