no code implementations • 7 Oct 2024 • Chunyi Li, Jianbo Zhang, ZiCheng Zhang, HaoNing Wu, Yuan Tian, Wei Sun, Guo Lu, Xiaohong Liu, Xiongkuo Min, Weisi Lin, Guangtao Zhai
However, various corruptions in the real world mean that images will not be as ideal as in simulations, presenting significant challenges for the practical application of LMMs.
no code implementations • 5 Oct 2024 • Ivan Molodetskikh, Artem Borisov, Dmitriy Vatolin, Radu Timofte, Jianzhao Liu, Tianwu Zhi, Yabin Zhang, Yang Li, Jingwen Xu, Yiting Liao, Qing Luo, Ao-Xiang Zhang, Peng Zhang, Haibo Lei, Linyan Jiang, Yaqing Li, Yuqin Cao, Wei Sun, Weixia Zhang, Yinan Sun, Ziheng Jia, Yuxin Zhu, Xiongkuo Min, Guangtao Zhai, Weihua Luo, Yupeng Z., Hong Y
This paper presents the Video Super-Resolution (SR) Quality Assessment (QA) Challenge that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2024.
no code implementations • 30 Sep 2024 • ZiCheng Zhang, Ziheng Jia, HaoNing Wu, Chunyi Li, Zijian Chen, Yingjie Zhou, Wei Sun, Xiaohong Liu, Xiongkuo Min, Weisi Lin, Guangtao Zhai
With the rising interest in research on Large Multi-modal Models (LMMs) for video understanding, many studies have emphasized general video comprehension capabilities while neglecting systematic exploration of video quality understanding.
no code implementations • 26 Sep 2024 • Zehao Zhu, Wei Sun, Jun Jia, Wei Wu, Sibin Deng, Kai Li, Ying Chen, Xiongkuo Min, Jia Wang, Guangtao Zhai
For the subjective QoE study, we introduce the first live video streaming QoE dataset, TaoLive QoE, which consists of 42 source videos collected from real live broadcasts and 1,155 corresponding distorted versions degraded by a variety of streaming distortions, including conventional ones such as compression and stalling, as well as live streaming-specific ones such as frame skipping and variable frame rate.
1 code implementation • 23 Sep 2024 • Andrey Moskalenko, Alexey Bryncev, Dmitry Vatolin, Radu Timofte, Gen Zhan, Li Yang, Yunlong Tang, Yiting Liao, Jiongzhi Lin, Baitao Huang, Morteza Moradi, Mohammad Moradi, Francesco Rundo, Concetto Spampinato, Ali Borji, Simone Palazzo, Yuxin Zhu, Yinan Sun, Huiyu Duan, Yuqin Cao, Ziheng Jia, Qiang Hu, Xiongkuo Min, Guangtao Zhai, Hao Fang, Runmin Cong, Xiankai Lu, Xiaofei Zhou, Wei zhang, Chunyu Zhao, Wentao Mu, Tao Deng, Hamed R. Tavakoli
The goal of the participants was to develop a method for predicting accurate saliency maps for the provided set of video sequences.
no code implementations • 17 Sep 2024 • Yongyang Pan, Xiaohong Liu, Siqi Luo, Yi Xin, Xiao Guo, Xiaoming Liu, Xiongkuo Min, Guangtao Zhai
Rapid advancements in multimodal large language models have enabled the creation of hyper-realistic images from textual descriptions.
no code implementations • 15 Sep 2024 • Yinan Sun, ZiCheng Zhang, HaoNing Wu, Xiaohong Liu, Weisi Lin, Guangtao Zhai, Xiongkuo Min
However, these models also exhibit hallucinations, which limit their reliability as AI systems, especially in tasks involving low-level visual perception and understanding.
no code implementations • 11 Sep 2024 • Yingjie Zhou, ZiCheng Zhang, Farong Wen, Jun Jia, Yanwei Jiang, Xiaohong Liu, Xiongkuo Min, Guangtao Zhai
To provide a valuable resource for future research and development in 3D content generation and quality assessment, the dataset has been open-sourced at https://github.com/zyj-2000/3DGCQA.
no code implementations • 9 Sep 2024 • Xiongkuo Min, Yixuan Gao, Yuqin Cao, Guangtao Zhai, Wenjun Zhang, Huifang Sun, Chang Wen Chen
RichIQA is characterized by two key novel designs: (1) a three-stage image quality prediction network that exploits the powerful feature representation capability of the Convolutional vision Transformer (CvT) and mimics the short-term and long-term memory mechanisms of the human brain; (2) a multi-label training strategy in which rich subjective quality information, such as MOS, SOS, and DOS, is used concurrently to train the quality prediction network.
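The multi-label strategy described above can be sketched as a combined objective over the three kinds of subjective labels. This is a minimal illustrative sketch, not the paper's exact loss: it assumes MOS and SOS are supervised with squared error and the DOS score histogram with a KL divergence, with hypothetical weights `w`.

```python
import numpy as np

def multi_label_quality_loss(pred_mos, pred_sos, pred_dos,
                             mos, sos, dos,
                             w=(1.0, 1.0, 1.0)):
    """Combine MOS/SOS/DOS supervision into one training objective.

    pred_dos / dos are score histograms (e.g. the fraction of subjects
    voting each of 5 rating bins); they are compared with a KL
    divergence, while MOS and SOS use squared error. The loss weights
    and divergence choice are illustrative assumptions.
    """
    eps = 1e-8
    mos_loss = (pred_mos - mos) ** 2
    sos_loss = (pred_sos - sos) ** 2
    dos_loss = np.sum(dos * np.log((dos + eps) / (pred_dos + eps)))
    return w[0] * mos_loss + w[1] * sos_loss + w[2] * dos_loss

# A perfect prediction gives (near) zero loss.
dos = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
loss = multi_label_quality_loss(3.0, 0.9, dos, 3.0, 0.9, dos)
```

Training on all three labels at once lets the network see not only the average opinion but also how much raters disagreed, which a MOS-only objective discards.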
1 code implementation • 1 Sep 2024 • Wei Sun, Weixia Zhang, Yuqin Cao, Linhan Cao, Jun Jia, Zijian Chen, ZiCheng Zhang, Xiongkuo Min, Guangtao Zhai
To address this problem, we design a multi-branch deep neural network (DNN) to assess the quality of UHD images from three perspectives: global aesthetic characteristics, local technical distortions, and salient content perception.
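The three-perspective design above implies three differently prepared inputs from the same UHD image. A minimal sketch, with assumed sizes and a centre crop standing in for actual salient-content detection:

```python
import numpy as np

def three_branch_inputs(img, global_size=224, patch=224, n_patches=3, seed=0):
    """Prepare the three inputs of a multi-branch UHD-IQA model:
    a downsampled global view (aesthetic characteristics), random
    full-resolution crops (local technical distortions), and a centre
    crop as a crude stand-in for salient-content detection."""
    h, w, _ = img.shape
    rng = np.random.default_rng(seed)
    # Global branch: nearest-neighbour downsample to a fixed size.
    ys = np.linspace(0, h - 1, global_size).astype(int)
    xs = np.linspace(0, w - 1, global_size).astype(int)
    global_view = img[np.ix_(ys, xs)]
    # Technical branch: random full-resolution crops keep pixel-level distortions.
    crops = []
    for _ in range(n_patches):
        y = rng.integers(0, h - patch)
        x = rng.integers(0, w - patch)
        crops.append(img[y:y + patch, x:x + patch])
    # Salient branch: centre crop (a real model would use a saliency map).
    cy, cx = (h - patch) // 2, (w - patch) // 2
    salient = img[cy:cy + patch, cx:cx + patch]
    return global_view, crops, salient

img = np.zeros((2160, 3840, 3), dtype=np.uint8)  # a 4K frame
g, c, s = three_branch_inputs(img)
```

The split matters for UHD content because downsampling the whole image (cheap, global) destroys exactly the fine distortions that the full-resolution crops (expensive, local) are meant to preserve.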
no code implementations • 26 Aug 2024 • Qihang Ge, Wei Sun, Yu Zhang, Yunhao Li, Zhongpeng Ji, Fengyu Sun, Shangling Jui, Xiongkuo Min, Guangtao Zhai
Then, we design a spatiotemporal vision encoder to extract spatial and temporal features to represent the quality characteristics of videos, which are subsequently mapped into the language space by the spatiotemporal projector for modality alignment.
1 code implementation • 21 Aug 2024 • Maksim Smirnov, Aleksandr Gushchin, Anastasia Antsiferova, Dmitry Vatolin, Radu Timofte, Ziheng Jia, ZiCheng Zhang, Wei Sun, Jiaying Qian, Yuqin Cao, Yinan Sun, Yuxin Zhu, Xiongkuo Min, Guangtao Zhai, Kanjar De, Qing Luo, Ao-Xiang Zhang, Peng Zhang, Haibo Lei, Linyan Jiang, Yaqing Li, Wenhui Meng, Xiaoheng Tan, Haiqiang Wang, Xiaozhong Xu, Shan Liu, Zhenzhong Chen, Zhengxue Cheng, Jiahao Xiao, Jun Xu, Chenlong He, Qi Zheng, Ruoxi Zhu, Min Li, Yibo Fan, Zhengzhong Tu
The challenge aimed to evaluate the performance of VQA methods on a diverse dataset of 459 videos, encoded with 14 codecs of various compression standards (AVC/H.264, HEVC/H.265, AV1, and VVC/H.266) and containing a comprehensive collection of compression artifacts.
no code implementations • 10 Aug 2024 • Yuxin Zhu, Huiyu Duan, Kaiwei Zhang, Yucheng Zhu, Xilei Zhu, Long Teng, Xiongkuo Min, Guangtao Zhai
To advance the research on audio-visual saliency prediction for ODVs, we further establish a new benchmark based on the AVS-ODV database by testing numerous state-of-the-art saliency models, including visual-only models and audio-visual models.
no code implementations • 8 Aug 2024 • Linhan Cao, Wei Sun, Xiongkuo Min, Jun Jia, ZiCheng Zhang, Zijian Chen, Yucheng Zhu, Lizhou Liu, Qiubo Chen, Jing Chen, Guangtao Zhai
Just noticeable distortion (JND), representing the threshold of distortion in an image that is minimally perceptible to the human visual system (HVS), is crucial for image compression algorithms to achieve a trade-off between transmission bit rate and image quality.
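Because JND is defined as a threshold on a (roughly) monotone visibility response, it can be located with a simple bisection once some perceptibility oracle is available. A sketch under that assumption, with a toy observer in place of a real HVS model:

```python
def find_jnd(is_perceptible, lo=0.0, hi=1.0, tol=1e-4):
    """Locate the just-noticeable-distortion threshold by bisection.

    `is_perceptible(level)` stands in for an HVS model (or a human
    observer) reporting whether distortion at `level` is visible; it
    is assumed to be monotone in `level`.
    """
    assert not is_perceptible(lo) and is_perceptible(hi)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if is_perceptible(mid):
            hi = mid   # visible: the threshold is at or below mid
        else:
            lo = mid   # invisible: the threshold is above mid
    return hi

# Toy observer: distortion becomes visible above level 0.37.
jnd = find_jnd(lambda x: x > 0.37)
```

In a compression setting, `level` would be a quantization parameter and the JND gives the largest setting whose distortion stays imperceptible, i.e. the rate/quality trade-off point the abstract refers to.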
no code implementations • 31 Jul 2024 • Xilei Zhu, Liu Yang, Huiyu Duan, Xiongkuo Min, Guangtao Zhai, Patrick Le Callet
However, the corresponding image quality assessment (IQA) research for egocentric spatial images is still lacking.
no code implementations • 31 Jul 2024 • Zhichao Zhang, Xinyue Li, Wei Sun, Jun Jia, Xiongkuo Min, ZiCheng Zhang, Chunyi Li, Zijian Chen, Puyi Wang, Zhongpeng Ji, Fengyu Sun, Shangling Jui, Guangtao Zhai
For the objective perspective, we establish a benchmark for evaluating existing quality assessment metrics on the LGVQ dataset, which reveals that current metrics perform poorly on the LGVQ dataset.
1 code implementation • 30 Jul 2024 • Huiyu Duan, Xiongkuo Min, Sijing Wu, Wei Shen, Guangtao Zhai
In this paper, we propose a text-induced unified image processor for low-level vision tasks, termed UniProcessor, which can effectively process various degradation types and levels, and support multimodal control.
no code implementations • 29 Jul 2024 • Yuqin Cao, Xiongkuo Min, Yixuan Gao, Wei Sun, Weisi Lin, Guangtao Zhai
In this paper, we propose the Unified No-reference Quality Assessment model (UNQA) for audio, image, video, and A/V content, which tries to train a single QA model across different media modalities.
1 code implementation • 18 Jul 2024 • Ruiyi Wang, Wenhao Li, Xiaohong Liu, Chunyi Li, ZiCheng Zhang, Xiongkuo Min, Guangtao Zhai
Existing methods have achieved remarkable performance in single image dehazing, particularly on synthetic datasets.
1 code implementation • 17 Jul 2024 • Han Zhou, Wei Dong, Xiaohong Liu, Shuaicheng Liu, Xiongkuo Min, Guangtao Zhai, Jun Chen
Most existing Low-light Image Enhancement (LLIE) methods either directly map Low-Light (LL) to Normal-Light (NL) images or use semantic or illumination maps as guides.
Ranked #1 on Low-Light Image Enhancement on LOLv2-synthetic
no code implementations • 22 Jun 2024 • Shiqi Gao, Huiyu Duan, Xinyue Li, Kang Fu, Yicong Peng, Qihang Xu, Yuanyuan Chang, Jia Wang, Xiongkuo Min, Guangtao Zhai
In this paper, we propose a quality-guided image enhancement paradigm that enables image enhancement models to learn the distribution of images with various quality ratings.
1 code implementation • 13 Jun 2024 • Chunyi Li, Xiele Wu, HaoNing Wu, Donghui Feng, ZiCheng Zhang, Guo Lu, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, Weisi Lin
With the development of Large Multimodal Models (LMMs), a Cross Modality Compression (CMC) paradigm of Image-Text-Image has emerged.
1 code implementation • 10 Jun 2024 • Zijian Chen, Wei Sun, Yuan Tian, Jun Jia, ZiCheng Zhang, Jiarui Wang, Ru Huang, Xiongkuo Min, Guangtao Zhai, Wenjun Zhang
Assessing action quality is both imperative and challenging due to its significant impact on the quality of AI-generated videos (AIGVs), a task further complicated by the inherently ambiguous nature of actions within such videos.
1 code implementation • 5 Jun 2024 • ZiCheng Zhang, HaoNing Wu, Chunyi Li, Yingjie Zhou, Wei Sun, Xiongkuo Min, Zijian Chen, Xiaohong Liu, Weisi Lin, Guangtao Zhai
How to accurately and efficiently assess AI-generated images (AIGIs) remains a critical challenge for generative models.
1 code implementation • 14 May 2024 • Wei Sun, Weixia Zhang, Yanwei Jiang, HaoNing Wu, ZiCheng Zhang, Jun Jia, Yingjie Zhou, Zhongpeng Ji, Xiongkuo Min, Weisi Lin, Guangtao Zhai
We employ the fidelity loss to train the model via a learning-to-rank manner to mitigate inconsistencies in quality scores in the portrait image quality assessment dataset PIQ.
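The fidelity loss mentioned above is a standard pairwise learning-to-rank objective; a minimal sketch follows, assuming a logistic link from the predicted score difference to a preference probability (the link choice is an assumption, not necessarily the paper's):

```python
import numpy as np

def fidelity_loss(score_i, score_j, p_ij):
    """Pairwise fidelity loss for learning-to-rank IQA.

    p_ij is the ground-truth probability that image i is preferred to
    image j (e.g. derived from annotator votes). The predicted
    preference is obtained from the score difference through a
    logistic link. The loss is zero when predicted and ground-truth
    preference distributions coincide.
    """
    p_hat = 1.0 / (1.0 + np.exp(-(score_i - score_j)))
    return 1.0 - np.sqrt(p_ij * p_hat) - np.sqrt((1 - p_ij) * (1 - p_hat))

# The loss shrinks as the predicted ordering matches the label.
good = fidelity_loss(2.0, -2.0, 1.0)   # model strongly agrees
bad = fidelity_loss(-2.0, 2.0, 1.0)    # model strongly disagrees
```

Training on relative preferences rather than raw scores is what mitigates the score inconsistencies in PIQ: only the ordering within a pair needs to be consistent, not the absolute numbers.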
1 code implementation • 14 May 2024 • Wei Sun, HaoNing Wu, ZiCheng Zhang, Jun Jia, Zhichao Zhang, Linhan Cao, Qiubo Chen, Xiongkuo Min, Weisi Lin, Guangtao Zhai
Motivated by previous research that leverages pre-trained features extracted from various computer vision models as the feature representation for BVQA, we further explore rich quality-aware features from pre-trained blind image quality assessment (BIQA) and BVQA models as auxiliary features to help the BVQA model handle the complex distortions and diverse content of social media videos.
1 code implementation • 12 May 2024 • Jiarui Wang, Huiyu Duan, Guangtao Zhai, Xiongkuo Min
Artificial Intelligence Generated Content (AIGC) has grown rapidly in recent years, among which AI-based image generation has gained widespread attention due to its efficient and imaginative image creation ability.
no code implementations • 10 May 2024 • Hanchi Sun, Xiaohong Liu, Xinyang Jiang, Yifei Shen, Dongsheng Li, Xiongkuo Min, Guangtao Zhai
This paper focuses on the task of quality enhancement for compressed videos.
1 code implementation • 28 Apr 2024 • ZiCheng Zhang, HaoNing Wu, Yingjie Zhou, Chunyi Li, Wei Sun, Chaofeng Chen, Xiongkuo Min, Xiaohong Liu, Weisi Lin, Guangtao Zhai
Although large multi-modality models (LMMs) have seen extensive exploration and application in various quality assessment studies, their integration into Point Cloud Quality Assessment (PCQA) remains unexplored.
1 code implementation • 27 Apr 2024 • Puyi Wang, Wei Sun, ZiCheng Zhang, Jun Jia, Yanwei Jiang, Zhichao Zhang, Xiongkuo Min, Guangtao Zhai
Traditional deep neural network (DNN)-based image quality assessment (IQA) models leverage convolutional neural networks (CNNs) or Transformers to learn the quality-aware feature representation, achieving commendable performance on natural scene images.
no code implementations • 25 Apr 2024 • Xiaohong Liu, Xiongkuo Min, Guangtao Zhai, Chunyi Li, Tengchuan Kou, Wei Sun, HaoNing Wu, Yixuan Gao, Yuqin Cao, ZiCheng Zhang, Xiele Wu, Radu Timofte, Fei Peng, Huiyuan Fu, Anlong Ming, Chuanming Wang, Huadong Ma, Shuai He, Zifei Dou, Shu Chen, Huacong Zhang, Haiyi Xie, Chengwei Wang, Baoying Chen, Jishen Zeng, Jianquan Yang, Weigang Wang, Xi Fang, Xiaoxin Lv, Jun Yan, Tianwu Zhi, Yabin Zhang, Yaohui Li, Yang Li, Jingwen Xu, Jianzhao Liu, Yiting Liao, Junlin Li, Zihao Yu, Yiting Lu, Xin Li, Hossein Motamednia, S. Farhad Hosseini-Benvidi, Fengbin Guan, Ahmad Mahmoudi-Aznaveh, Azadeh Mansouri, Ganzorig Gankhuyag, Kihwan Yoon, Yifang Xu, Haotian Fan, Fangyuan Kong, Shiling Zhao, Weifeng Dong, Haibing Yin, Li Zhu, Zhiling Wang, Bingchen Huang, Avinab Saha, Sandeep Mishra, Shashank Gupta, Rajesh Sureddi, Oindrila Saha, Luigi Celona, Simone Bianco, Paolo Napoletano, Raimondo Schettini, Junfeng Yang, Jing Fu, Wei zhang, Wenzhi Cao, Limei Liu, Han Peng, Weijun Yuan, Zhan Li, Yihang Cheng, Yifan Deng, Haohui Li, Bowen Qu, Yao Li, Shuqing Luo, Shunzhou Wang, Wei Gao, Zihao Lu, Marcos V. Conde, Xinrui Wang, Zhibo Chen, Ruling Liao, Yan Ye, Qiulin Wang, Bing Li, Zhaokun Zhou, Miao Geng, Rui Chen, Xin Tao, Xiaoyu Liang, Shangkun Sun, Xingyuan Ma, Jiaze Li, Mengduo Yang, Haoran Xu, Jie zhou, Shiding Zhu, Bohan Yu, Pengfei Chen, Xinrui Xu, Jiabin Shen, Zhichao Duan, Erfan Asadi, Jiahe Liu, Qi Yan, Youran Qu, Xiaohui Zeng, Lele Wang, Renjie Liao
A total of 196 participants have registered in the video track.
1 code implementation • 24 Apr 2024 • Marcos V. Conde, Saman Zadtootaghaj, Nabajeet Barman, Radu Timofte, Chenlong He, Qi Zheng, Ruoxi Zhu, Zhengzhong Tu, Haiqiang Wang, Xiangguang Chen, Wenhui Meng, Xiang Pan, Huiying Shi, Han Zhu, Xiaozhong Xu, Lei Sun, Zhenzhong Chen, Shan Liu, ZiCheng Zhang, HaoNing Wu, Yingjie Zhou, Chunyi Li, Xiaohong Liu, Weisi Lin, Guangtao Zhai, Wei Sun, Yuqin Cao, Yanwei Jiang, Jun Jia, Zhichao Zhang, Zijian Chen, Weixia Zhang, Xiongkuo Min, Steve Göring, Zihao Qi, Chen Feng
The performance of the top-5 submissions is reviewed and provided here as a survey of diverse deep models for efficient video quality assessment of user-generated content.
1 code implementation • 17 Apr 2024 • Xin Li, Kun Yuan, Yajing Pei, Yiting Lu, Ming Sun, Chao Zhou, Zhibo Chen, Radu Timofte, Wei Sun, HaoNing Wu, ZiCheng Zhang, Jun Jia, Zhichao Zhang, Linhan Cao, Qiubo Chen, Xiongkuo Min, Weisi Lin, Guangtao Zhai, Jianhui Sun, Tianyi Wang, Lei LI, Han Kong, Wenxuan Wang, Bing Li, Cheng Luo, Haiqiang Wang, Xiangguang Chen, Wenhui Meng, Xiang Pan, Huiying Shi, Han Zhu, Xiaozhong Xu, Lei Sun, Zhenzhong Chen, Shan Liu, Fangyuan Kong, Haotian Fan, Yifang Xu, Haoran Xu, Mengduo Yang, Jie zhou, Jiaze Li, Shijie Wen, Mai Xu, Da Li, Shunyu Yao, Jiazhi Du, WangMeng Zuo, Zhibo Li, Shuai He, Anlong Ming, Huiyuan Fu, Huadong Ma, Yong Wu, Fie Xue, Guozhi Zhao, Lina Du, Jie Guo, Yu Zhang, huimin zheng, JunHao Chen, Yue Liu, Dulan Zhou, Kele Xu, Qisheng Xu, Tao Sun, Zhixiang Ding, Yuhang Hu
This paper reviews the NTIRE 2024 Challenge on Short-form UGC Video Quality Assessment (S-UGC VQA), where various solutions were submitted and evaluated on KVQ, a dataset collected from the popular short-form video platform Kuaishou/Kwai.
1 code implementation • 13 Apr 2024 • Yingjie Zhou, ZiCheng Zhang, Wei Sun, Xiaohong Liu, Xiongkuo Min, Zhihua Wang, Xiao-Ping Zhang, Guangtao Zhai
In the realm of media technology, digital humans have gained prominence due to rapid advancements in computer technology.
no code implementations • 11 Apr 2024 • Yinan Sun, Xiongkuo Min, Huiyu Duan, Guangtao Zhai
Finally, considering the effect of text descriptions on visual attention, an impact that most existing saliency models ignore, we further propose a text-guided saliency (TGSal) prediction model, which extracts and integrates both image features and text features to predict image saliency under various text-description conditions.
no code implementations • 4 Apr 2024 • Chunyi Li, Tengchuan Kou, Yixuan Gao, Yuqin Cao, Wei Sun, ZiCheng Zhang, Yingjie Zhou, Zhichao Zhang, Weixia Zhang, HaoNing Wu, Xiaohong Liu, Xiongkuo Min, Guangtao Zhai
With the rapid advancements in AI-Generated Content (AIGC), AI-Generated Images (AIGIs) have been widely applied in entertainment, education, and social media.
1 code implementation • 1 Apr 2024 • Liu Yang, Huiyu Duan, Long Teng, Yucheng Zhu, Xiaohong Liu, Menghan Hu, Xiongkuo Min, Guangtao Zhai, Patrick Le Callet
Finally, we conduct a benchmark experiment to evaluate the performance of state-of-the-art IQA models on our database.
1 code implementation • 18 Mar 2024 • Tengchuan Kou, Xiaohong Liu, ZiCheng Zhang, Chunyi Li, HaoNing Wu, Xiongkuo Min, Guangtao Zhai, Ning Liu
Based on T2VQA-DB, we propose a novel transformer-based model for subjective-aligned Text-to-Video Quality Assessment (T2VQA).
no code implementations • 5 Feb 2024 • Xiongkuo Min, Huiyu Duan, Wei Sun, Yucheng Zhu, Guangtao Zhai
Perceptual video quality assessment plays a vital role in the field of video processing due to the existence of quality degradations introduced in various stages of video signal acquisition, compression, transmission and display.
1 code implementation • 2 Jan 2024 • Chunyi Li, HaoNing Wu, ZiCheng Zhang, Hongkun Hao, Kaiwei Zhang, Lei Bai, Xiaohong Liu, Xiongkuo Min, Weisi Lin, Guangtao Zhai
With the rapid evolution of Text-to-Image (T2I) models in recent years, their unsatisfactory generation results have become a challenge.
1 code implementation • 28 Dec 2023 • HaoNing Wu, ZiCheng Zhang, Weixia Zhang, Chaofeng Chen, Liang Liao, Chunyi Li, Yixuan Gao, Annan Wang, Erli Zhang, Wenxiu Sun, Qiong Yan, Xiongkuo Min, Guangtao Zhai, Weisi Lin
The explosion of visual content available online underscores the requirement for an accurate machine assessor to robustly evaluate scores across diverse types of visual contents.
Ranked #1 on Video Quality Assessment on LIVE-FB LSVQ
no code implementations • 25 Dec 2023 • Jinliang Han, Xiongkuo Min, Yixuan Gao, Jun Jia, Lei Sun, Zuowei Cao, Yonglin Luo, Guangtao Zhai
To evaluate the quality of VFI frames without reference videos, a no-reference perceptual quality assessment method is proposed in this paper.
no code implementations • 23 Dec 2023 • ZiCheng Zhang, HaoNing Wu, Zhongpeng Ji, Chunyi Li, Erli Zhang, Wei Sun, Xiaohong Liu, Xiongkuo Min, Fengyu Sun, Shangling Jui, Weisi Lin, Guangtao Zhai
Recent advancements in Multi-modality Large Language Models (MLLMs) have demonstrated remarkable capabilities in complex high-level vision tasks.
1 code implementation • 9 Dec 2023 • Zijian Chen, Wei Sun, HaoNing Wu, ZiCheng Zhang, Jun Jia, Zhongpeng Ji, Fengyu Sun, Shangling Jui, Xiongkuo Min, Guangtao Zhai, Wenjun Zhang
In this paper, we take the first step to benchmark and assess the visual naturalness of AI-generated images.
no code implementations • 30 Nov 2023 • Zijian Chen, Wei Sun, ZiCheng Zhang, Ru Huang, Fangfang Lu, Xiongkuo Min, Guangtao Zhai, Wenjun Zhang
Banding artifact, also known as staircase-like contour, is a common quality annoyance that arises during compression, transmission, etc.
1 code implementation • 29 Nov 2023 • Zijian Chen, Wei Sun, Jun Jia, Fangfang Lu, ZiCheng Zhang, Jing Liu, Ru Huang, Xiongkuo Min, Guangtao Zhai
The quality score of a banding image is generated by pooling the banding detection maps masked by the spatial frequency filters.
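The pooling step described above can be sketched as follows. This is an illustrative mean-pooling sketch under assumed conventions (per-pixel banding likelihoods in [0, 1], a visibility mask from the spatial frequency filters), not the paper's exact formulation:

```python
import numpy as np

def banding_quality_score(banding_map, freq_mask):
    """Pool a banding-detection map into a single quality score.

    `banding_map` holds per-pixel banding likelihoods in [0, 1];
    `freq_mask` down-weights regions (e.g. high-frequency texture)
    where banding is visually masked and thus invisible. Higher
    scores mean less visible banding.
    """
    visible = banding_map * freq_mask
    return 1.0 - visible.mean()

flat_sky = np.zeros((64, 64))
flat_sky[20:30] = 0.8            # a band of strong detected banding
mask = np.ones((64, 64))         # nothing masked: all banding visible
score = banding_quality_score(flat_sky, mask)
```

The mask is what makes the score perceptual rather than purely signal-based: banding detected inside busy texture contributes little, while the same contours in a flat sky dominate the score.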
no code implementations • 9 Nov 2023 • Yuxin Zhu, Xilei Zhu, Huiyu Duan, Jie Li, Kaiwei Zhang, Yucheng Zhu, Li Chen, Xiongkuo Min, Guangtao Zhai
Visual saliency prediction for omnidirectional videos (ODVs) is of great significance for ODV coding, transmission, rendering, etc.
no code implementations • 26 Oct 2023 • ZiCheng Zhang, Yingjie Zhou, Wei Sun, Xiongkuo Min, Guangtao Zhai
Point clouds are widely used in 3D content representation and have various applications in multimedia.
no code implementations • 25 Oct 2023 • Yingjie Zhou, ZiCheng Zhang, Wei Sun, Xiongkuo Min, Xianghe Ma, Guangtao Zhai
In this paper, we develop a novel no-reference (NR) method based on Transformer to deal with DHQA in a multi-task manner.
no code implementations • 24 Oct 2023 • ZiCheng Zhang, Yingjie Zhou, Wei Sun, Xiongkuo Min, Guangtao Zhai
Usually, DDHs are displayed as 2D rendered animation videos and it is natural to adapt video quality assessment (VQA) methods to DDH quality assessment (DDH-QA) tasks.
no code implementations • 26 Aug 2023 • Danyang Tu, Wei Shen, Wei Sun, Xiongkuo Min, Guangtao Zhai
In contrast, we reframe the gaze following detection task as detecting human head locations and their gaze followings simultaneously, aiming to jointly detect the human gaze location and gaze object in a unified, single-stage pipeline.
1 code implementation • 9 Aug 2023 • Tengchuan Kou, Xiaohong Liu, Wei Sun, Jun Jia, Xiongkuo Min, Guangtao Zhai, Ning Liu
Indeed, most existing quality assessment models evaluate video quality as a whole without specifically taking the subjective experience of video stability into consideration.
1 code implementation • 26 Jul 2023 • Wei Sun, Wen Wen, Xiongkuo Min, Long Lan, Guangtao Zhai, Kede Ma
By minimalistic, we restrict our family of BVQA models to build only upon basic blocks: a video preprocessor (for aggressive spatiotemporal downsampling), a spatial quality analyzer, an optional temporal quality analyzer, and a quality regressor, all with the simplest possible instantiations.
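A toy instance of that four-block family can be written in a few lines. Everything here is a deliberately trivial stand-in (local variance as the spatial analyzer, no temporal analyzer, averaging as the regressor); real instantiations would plug learned blocks into the same slots:

```python
import numpy as np

def minimal_bvqa(video, t_stride=8, s_stride=4):
    """A toy instance of the minimalistic BVQA family.

    Blocks: a video preprocessor (aggressive spatiotemporal
    downsampling), a spatial quality analyzer (per-frame variance as a
    hypothetical sharpness proxy), an omitted optional temporal
    analyzer, and an averaging quality regressor.
    """
    # Video preprocessor: keep every t_stride-th frame, every s_stride-th pixel.
    clip = video[::t_stride, ::s_stride, ::s_stride]
    # Spatial quality analyzer: one scalar feature per frame.
    per_frame = clip.reshape(clip.shape[0], -1).var(axis=1)
    # Quality regressor: collapse per-frame features to one score.
    return float(per_frame.mean())

video = np.random.default_rng(0).random((32, 256, 256))
score = minimal_bvqa(video)
```

The point of the minimalistic framing is exactly this modularity: each slot can be swapped independently, so the contribution of any one block can be measured against the simplest possible baseline.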
1 code implementation • 20 Jul 2023 • Xilei Zhu, Huiyu Duan, Yuqin Cao, Yuxin Zhu, Yucheng Zhu, Jing Liu, Li Chen, Xiongkuo Min, Guangtao Zhai
Omnidirectional videos (ODVs) play an increasingly important role in application fields such as medicine, education, advertising, and tourism.
no code implementations • 19 Jul 2023 • Xiaohong Liu, Xiongkuo Min, Wei Sun, Yulun Zhang, Kai Zhang, Radu Timofte, Guangtao Zhai, Yixuan Gao, Yuqin Cao, Tengchuan Kou, Yunlong Dong, Ziheng Jia, Yilin Li, Wei Wu, Shuming Hu, Sibin Deng, Pengxiang Xiao, Ying Chen, Kai Li, Kai Zhao, Kun Yuan, Ming Sun, Heng Cong, Hao Wang, Lingzhi Fu, Yusheng Zhang, Rongyu Zhang, Hang Shi, Qihang Xu, Longan Xiao, Zhiliang Ma, Mirko Agarla, Luigi Celona, Claudio Rota, Raimondo Schettini, Zhiwei Huang, Yanan Li, Xiaotao Wang, Lei Lei, Hongye Liu, Wei Hong, Ironhead Chuang, Allen Lin, Drake Guan, Iris Chen, Kae Lou, Willy Huang, Yachun Tasi, Yvonne Kao, Haotian Fan, Fangyuan Kong, Shiqi Zhou, Hao liu, Yu Lai, Shanshan Chen, Wenqi Wang, HaoNing Wu, Chaofeng Chen, Chunzheng Zhu, Zekun Guo, Shiling Zhao, Haibing Yin, Hongkui Wang, Hanene Brachemi Meftah, Sid Ahmed Fezza, Wassim Hamidouche, Olivier Déforges, Tengfei Shi, Azadeh Mansouri, Hossein Motamednia, Amir Hossein Bakhtiari, Ahmad Mahmoudi Aznaveh
61 participating teams submitted their prediction results during the development phase, with a total of 3168 submissions.
1 code implementation • IEEE Transactions on Circuits and Systems for Video Technology 2023 • Yixuan Gao, Xiongkuo Min, Yucheng Zhu, Xiao-Ping Zhang, Guangtao Zhai
On the other hand, we also prove the feasibility of the proposed method in predicting the MOS of image quality on several popular IQA databases, including CSIQ, TID2013, LIVE MD, and LIVE Challenge.
1 code implementation • IEEE Transactions on Image Processing 2023 • Yuqin Cao, Xiongkuo Min, Wei Sun, Guangtao Zhai
Then, to facilitate the development of the AVQA field, we construct a benchmark of AVQA models on the proposed SJTU-UAV database and two other AVQA databases; the benchmark models consist of AVQA models designed for synthetically distorted A/V sequences and AVQA models built by combining popular VQA methods with audio features via a support vector regressor (SVR).
1 code implementation • 6 Jul 2023 • ZiCheng Zhang, Wei Sun, Yingjie Zhou, HaoNing Wu, Chunyi Li, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, Weisi Lin
To address this gap, we propose SJTU-H3D, a subjective quality assessment database specifically designed for full-body digital humans.
1 code implementation • 1 Jul 2023 • Jiarui Wang, Huiyu Duan, Jing Liu, Shi Chen, Xiongkuo Min, Guangtao Zhai
In this paper, in order to get a better understanding of human visual preferences for AIGIs, a large-scale IQA database for AIGC is established, named AIGCIQA2023.
1 code implementation • 9 Jun 2023 • ZiCheng Zhang, Wei Sun, Houning Wu, Yingjie Zhou, Chunyi Li, Xiongkuo Min, Guangtao Zhai, Weisi Lin
Model-based 3DQA methods extract features directly from the 3D models, which are characterized by their high degree of complexity.
1 code implementation • 7 Jun 2023 • Chunyi Li, ZiCheng Zhang, HaoNing Wu, Wei Sun, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, Weisi Lin
With the rapid advancements of the text-to-image generative model, AI-generated images (AGIs) have been widely applied to entertainment, education, social media, etc.
1 code implementation • 30 Mar 2023 • Huiyu Duan, Wei Shen, Xiongkuo Min, Danyang Tu, Long Teng, Jia Wang, Guangtao Zhai
Recently, masked autoencoders (MAE) for feature pre-training have further unleashed the potential of Transformers, leading to state-of-the-art performances on various high-level vision tasks.
Ranked #4 on Image Defocus Deblurring on DPD (Dual-view)
1 code implementation • CVPR 2023 • ZiCheng Zhang, Wei Wu, Wei Sun, Dangyang Tu, Wei Lu, Xiongkuo Min, Ying Chen, Guangtao Zhai
User-generated content (UGC) live videos are often bothered by various distortions during capture procedures and thus exhibit diverse visual qualities.
1 code implementation • 22 Mar 2023 • ZiCheng Zhang, Chunyi Li, Wei Sun, Xiaohong Liu, Xiongkuo Min, Guangtao Zhai
AI-Generated Content (AIGC) has gained widespread attention with the increasing efficiency of deep learning in content creation.
1 code implementation • 16 Mar 2023 • Yixuan Gao, Yuqin Cao, Tengchuan Kou, Wei Sun, Yunlong Dong, Xiaohong Liu, Xiongkuo Min, Guangtao Zhai
Few researchers have specifically proposed a video quality assessment method for video enhancement, and there is also no comprehensive video quality assessment dataset available in public.
1 code implementation • 14 Mar 2023 • ZiCheng Zhang, Wei Sun, Yingjie Zhou, Jun Jia, Zhichao Zhang, Jing Liu, Xiongkuo Min, Guangtao Zhai
Computer graphics images (CGIs) are artificially generated by computer programs and are widely viewed in various scenarios, such as games and streaming media.
no code implementations • 4 Mar 2023 • Yuqin Cao, Xiongkuo Min, Wei Sun, XiaoPing Zhang, Guangtao Zhai
Specifically, we construct the first UGC AVQA database named the SJTU-UAV database, which includes 520 in-the-wild UGC audio and video (A/V) sequences, and conduct a user study to obtain the mean opinion scores of the A/V sequences.
no code implementations • 17 Feb 2023 • ZiCheng Zhang, Wei Sun, Yingjie Zhou, Wei Lu, Yucheng Zhu, Xiongkuo Min, Guangtao Zhai
Currently, great effort has been put into improving the effectiveness of 3D model quality assessment (3DQA) methods.
1 code implementation • 24 Dec 2022 • ZiCheng Zhang, Yingjie Zhou, Wei Sun, Wei Lu, Xiongkuo Min, Yu Wang, Guangtao Zhai
In recent years, a large amount of effort has been put into pushing forward the real-world application of dynamic digital humans (DDHs).
1 code implementation • 3 Oct 2022 • Weixia Zhang, Dingquan Li, Xiongkuo Min, Guangtao Zhai, Guodong Guo, Xiaokang Yang, Kede Ma
No-reference image quality assessment (NR-IQA) aims to quantify how humans perceive visual distortions of digital images without access to their undistorted references.
1 code implementation • 20 Sep 2022 • ZiCheng Zhang, Yingjie Zhou, Wei Sun, Xiongkuo Min, Yuzhe Wu, Guangtao Zhai
Digital humans have attracted growing research interest over the last decade, with substantial effort devoted to their generation, representation, rendering, and animation.
1 code implementation • 1 Sep 2022 • ZiCheng Zhang, Wei Sun, Xiongkuo Min, Quan Zhou, Jun He, Qiyuan Wang, Guangtao Zhai
Specifically, we split the point clouds into sub-models to represent local geometry distortions such as point shift and down-sampling.
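The splitting step can be sketched with a coarse voxel grid. This is a hedged sketch of one plausible partitioning scheme, not the paper's exact procedure:

```python
import numpy as np

def split_into_submodels(points, grid=2):
    """Partition a point cloud (N x 3 array) into sub-models with a
    coarse voxel grid, so local distortions (point shift,
    down-sampling) can be measured per region. `grid` is the number of
    cells per axis; empty cells are dropped."""
    mins, maxs = points.min(0), points.max(0)
    # Map each point to an integer cell index along every axis.
    idx = np.floor((points - mins) / (maxs - mins + 1e-9) * grid).astype(int)
    idx = np.clip(idx, 0, grid - 1)
    # Flatten the 3D cell index into a single key per point.
    keys = idx[:, 0] * grid * grid + idx[:, 1] * grid + idx[:, 2]
    return [points[keys == k] for k in np.unique(keys)]

pts = np.random.default_rng(0).random((1000, 3))
subs = split_into_submodels(pts, grid=2)
```

Each sub-model can then be projected or analyzed independently, so a distortion confined to one region of the cloud is not averaged away by the undistorted remainder.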
Ranked #1 on Point Cloud Quality Assessment on SJTU-PCQA
1 code implementation • 30 Aug 2022 • ZiCheng Zhang, Wei Sun, Yucheng Zhu, Xiongkuo Min, Wei Wu, Ying Chen, Guangtao Zhai
To tackle the challenge of point cloud quality assessment (PCQA), many PCQA methods have been proposed to evaluate the visual quality levels of point clouds by assessing the rendered static 2D projections.
no code implementations • 6 Jul 2022 • Huiyu Duan, Guangtao Zhai, Xiongkuo Min, Yucheng Zhu, Yi Fang, Xiaokang Yang
The original and distorted omnidirectional images, subjective quality ratings, and the head and eye movement data together constitute the OIQA database.
no code implementations • 10 Jun 2022 • Tao Wang, ZiCheng Zhang, Wei Sun, Xiongkuo Min, Wei Lu, Guangtao Zhai
However, limited work has addressed the problem of quality assessment for computer graphics generated images (CG-IQA).
no code implementations • 9 Jun 2022 • Yu Fan, ZiCheng Zhang, Wei Sun, Xiongkuo Min, Wei Lu, Tao Wang, Ning Liu, Guangtao Zhai
Point cloud is one of the most widely used digital formats of 3D models, the visual quality of which is quite sensitive to distortions such as downsampling, noise, and compression.
no code implementations • 9 Jun 2022 • Wei Lu, Wei Sun, Wenhan Zhu, Xiongkuo Min, ZiCheng Zhang, Tao Wang, Guangtao Zhai
In this paper, we first conduct an example experiment (i.e., the face detection task) to demonstrate that the quality of the SIs has a crucial impact on the performance of the IVSS. We then propose a saliency-based deep neural network for the blind quality assessment of the SIs, which helps the IVSS filter out low-quality SIs and improve detection and recognition performance.
no code implementations • 9 Jun 2022 • Wei Lu, Wei Sun, Xiongkuo Min, Wenhan Zhu, Quan Zhou, Jun He, Qiyuan Wang, ZiCheng Zhang, Tao Wang, Guangtao Zhai
In this paper, we propose a deep learning-based BIQA model for 4K content, which on one hand can recognize true and pseudo 4K content and on the other hand can evaluate their perceptual visual quality.
no code implementations • 9 Jun 2022 • ZiCheng Zhang, Wei Sun, Xiongkuo Min, Wenhan Zhu, Tao Wang, Wei Lu, Guangtao Zhai
Therefore, in this paper, we propose a no-reference deep-learning image quality assessment method based on frequency maps because the artifacts caused by SISR algorithms are quite sensitive to frequency information.
no code implementations • 8 Jun 2022 • ZiCheng Zhang, Wei Sun, Wei Wu, Ying Chen, Xiongkuo Min, Guangtao Zhai
Nowadays, mainstream full-reference (FR) metrics are effective at predicting the quality of compressed images at coarse-grained levels (where the bit-rate differences between compressed images are obvious); however, they may perform poorly on fine-grained compressed images whose bit-rate differences are quite subtle.
no code implementations • 4 Jun 2022 • Danyang Tu, Wei Sun, Xiongkuo Min, Guangtao Zhai, Wei Shen
We present a novel vision Transformer, named TUTOR, which is able to learn tubelet tokens, serving as highly abstracted spatiotemporal representations, for video-based human-object interaction (V-HOI) detection.
no code implementations • 12 May 2022 • Qiuping Jiang, Jiawu Xu, Yudong Mao, Wei Zhou, Xiongkuo Min, Guangtao Zhai
The DDB-Net contains three modules, i.e., an image decomposition module, a feature encoding module, and a bilinear pooling module.
1 code implementation • 29 Apr 2022 • Wei Sun, Xiongkuo Min, Wei Lu, Guangtao Zhai
The proposed model utilizes very sparse frames to extract spatial features and dense frames (i.e., the video chunk) at a very low spatial resolution to extract motion features, and thereby has low computational complexity.
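The sampling scheme described above can be sketched directly. The strides and sizes below are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def sample_for_vqa(video, spatial_stride=30, chunk_len=16, down=8):
    """Split a video (T x H x W array) into the two inputs of an
    efficient VQA model: sparse full-resolution frames for spatial
    features, and a dense low-resolution chunk for motion features."""
    sparse_frames = video[::spatial_stride]        # few frames, full resolution
    chunk = video[:chunk_len, ::down, ::down]      # many frames, tiny resolution
    return sparse_frames, chunk

video = np.zeros((300, 1080, 1920))  # a 10 s clip at 30 fps, 1080p
frames, chunk = sample_for_vqa(video)
```

The asymmetry is the whole trick: spatial distortions need resolution but not frame rate, while motion artifacts need frame rate but survive heavy downscaling, so neither branch ever processes the full spatiotemporal volume.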
Ranked #5 on Video Quality Assessment on YouTube-UGC
1 code implementation • 18 Apr 2022 • Huiyu Duan, Wei Shen, Xiongkuo Min, Danyang Tu, Jing Li, Guangtao Zhai
Therefore, in this paper, we mainly analyze the interaction effect between background (BG) scenes and AR contents, and study the saliency prediction problem in AR.
1 code implementation • 11 Apr 2022 • Huiyu Duan, Xiongkuo Min, Yucheng Zhu, Guangtao Zhai, Xiaokang Yang, Patrick Le Callet
An objective metric termed CFIQA is also proposed to better evaluate the confusing image quality.
no code implementations • CVPR 2022 • Danyang Tu, Xiongkuo Min, Huiyu Duan, Guodong Guo, Guangtao Zhai, Wei Shen
In contrast, we redefine the HGT detection task as detecting human head locations and their gaze targets, simultaneously.
no code implementations • 20 Mar 2022 • Danyang Tu, Xiongkuo Min, Huiyu Duan, Guodong Guo, Guangtao Zhai, Wei Shen
Iwin Transformer is a hierarchical Transformer which progressively performs token representation learning and token agglomeration within irregular windows.
no code implementations • 2 Mar 2022 • Yixuan Gao, Xiongkuo Min, Wenhan Zhu, Xiao-Ping Zhang, Guangtao Zhai
Experimental results verify the feasibility of using the alpha stable model to describe the IQSD, and prove the effectiveness of the objective alpha-stable-model-based IQSD prediction method.
no code implementations • 25 Feb 2022 • Wei Zhou, Xiongkuo Min, Hong Li, Qiuping Jiang
In this paper, we provide a brief overview of adaptive video streaming quality assessment.
no code implementations • CVPR 2022 • Jun Jia, Zhongpai Gao, Dandan Zhu, Xiongkuo Min, Guangtao Zhai, Xiaokang Yang
In addition, the automatic localization of hidden codes significantly reduces the time of manually correcting geometric distortions for photos, which is a revolutionary innovation for information hiding in mobile applications.
1 code implementation • ICCV 2021 • Yuan Tian, Guo Lu, Xiongkuo Min, Zhaohui Che, Guangtao Zhai, Guodong Guo, Zhiyong Gao
After optimization, the video downscaled by our framework preserves more meaningful information, which is beneficial for both the upscaling step and downstream tasks, e.g., the video action recognition task.
2 code implementations • 5 Jul 2021 • ZiCheng Zhang, Wei Sun, Xiongkuo Min, Tao Wang, Wei Lu, Guangtao Zhai
Therefore, many related studies such as point cloud quality assessment (PCQA) and mesh quality assessment (MQA) have been carried out to measure the visual quality degradations of 3D models.
Ranked #3 on Point Cloud Quality Assessment on WPC
1 code implementation • 2 Jun 2021 • Wei Sun, Tao Wang, Xiongkuo Min, Fuwang Yi, Guangtao Zhai
The proposed VQA framework consists of three modules, the feature extraction module, the quality regression module, and the quality pooling module.
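Of the three modules listed, the quality pooling module is the easiest to make concrete. A hedged sketch follows, using a crude memory effect (each frame pulled toward the worst recent frame) to mimic the human tendency to remember quality drops; the weighting is an illustrative assumption, not the paper's pooling rule:

```python
import numpy as np

def temporal_pooling(frame_scores, tau=3):
    """Quality pooling module: combine per-frame quality scores into a
    single video-level score with a simple hysteresis-like memory,
    where each frame's score is blended with the minimum over the
    previous `tau` frames (quality drops linger in memory)."""
    scores = np.asarray(frame_scores, dtype=float)
    pooled = np.empty_like(scores)
    for t in range(len(scores)):
        lo = max(0, t - tau)
        pooled[t] = 0.5 * scores[t] + 0.5 * scores[lo:t + 1].min()
    return float(pooled.mean())

# A brief quality drop drags the video score below the plain mean.
video_score = temporal_pooling([4.0, 4.0, 1.0, 4.0, 4.0])
```

A plain average of the same scores would be 3.4; the memory term yields a lower score, reflecting that viewers judge a clip with a visible glitch more harshly than its average frame quality suggests.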
Ranked #12 on Video Quality Assessment on MSU NR VQA Database
1 code implementation • CVPR 2020 • Wang Shen, Wenbo Bao, Guangtao Zhai, Li Chen, Xiongkuo Min, Zhiyong Gao
Existing works reduce motion blur and up-convert frame rate in two separate ways: frame deblurring and frame interpolation.
no code implementations • 12 Dec 2019 • Yucheng Zhu, Xiongkuo Min, Dandan Zhu, Ke Gu, Jiantao Zhou, Guangtao Zhai, Xiaokang Yang, Wenjun Zhang
The saliency annotations of head and eye movements for both original and augmented videos are collected and together constitute the ARVR dataset.
1 code implementation • 16 May 2019 • Zhaohui Che, Ali Borji, Guangtao Zhai, Xiongkuo Min, Guodong Guo, Patrick Le Callet
Data size is the bottleneck for developing deep saliency models, because collecting eye-movement data is very time consuming and expensive.
1 code implementation • 10 Oct 2018 • Zhaohui Che, Ali Borji, Guangtao Zhai, Xiongkuo Min
Most of current studies on human gaze and saliency modeling have used high-quality stimuli.
no code implementations • 12 Jul 2017 • Menghan Hu, Xiongkuo Min, Guangtao Zhai, Wenhan Zhu, Yucheng Zhu, Zhaodi Wang, Xiaokang Yang, Guang Tian
Subsequently, the existing no-reference IQA algorithms, comprising 5 opinion-aware approaches (NFERM, GMLF, DIIVINE, BRISQUE, and BLIINDS2) and 8 opinion-unaware approaches (QAC, SISBLIM, NIQE, FISBLIM, CPBD, S3, and Fish_bb), were executed to evaluate the THz security image quality.