no code implementations • ECCV 2020 • Yuan Tian, Zhaohui Che, Wenbo Bao, Guangtao Zhai, Zhiyong Gao
Motion representation is key to many computer vision problems but has never been well studied in the literature.
1 code implementation • 12 Nov 2023 • HaoNing Wu, ZiCheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Annan Wang, Kaixin Xu, Chunyi Li, Jingwen Hou, Guangtao Zhai, Geng Xue, Wenxiu Sun, Qiong Yan, Weisi Lin
Multi-modality foundation models, as represented by GPT-4V, have brought a new paradigm for low-level visual perception and understanding tasks, as they can respond to a broad range of natural human instructions within a single model.
no code implementations • 9 Nov 2023 • Yuxin Zhu, Xilei Zhu, Huiyu Duan, Jie Li, Kaiwei Zhang, Yucheng Zhu, Li Chen, Xiongkuo Min, Guangtao Zhai
Visual saliency prediction for omnidirectional videos (ODVs) is of great significance and necessity, helping ODV coding, ODV transmission, ODV rendering, etc.
no code implementations • 26 Oct 2023 • ZiCheng Zhang, Yingjie Zhou, Wei Sun, Xiongkuo Min, Guangtao Zhai
Point clouds are widely used in 3D content representation and have various applications in multimedia.
no code implementations • 25 Oct 2023 • Yingjie Zhou, ZiCheng Zhang, Wei Sun, Xiongkuo Min, Xianghe Ma, Guangtao Zhai
In this paper, we develop a novel no-reference (NR) method based on Transformer to deal with DHQA in a multi-task manner.
no code implementations • 24 Oct 2023 • ZiCheng Zhang, Yingjie Zhou, Wei Sun, Xiongkuo Min, Guangtao Zhai
Usually, DDHs are displayed as 2D rendered animation videos and it is natural to adapt video quality assessment (VQA) methods to DDH quality assessment (DDH-QA) tasks.
2 code implementations • 25 Sep 2023 • HaoNing Wu, ZiCheng Zhang, Erli Zhang, Chaofeng Chen, Liang Liao, Annan Wang, Chunyi Li, Wenxiu Sun, Qiong Yan, Guangtao Zhai, Weisi Lin
To address this gap, we present Q-Bench, a holistic benchmark crafted to systematically evaluate potential abilities of MLLMs on three realms: low-level visual perception, low-level visual description, and overall visual quality assessment.
no code implementations • 26 Aug 2023 • Danyang Tu, Wei Shen, Wei Sun, Xiongkuo Min, Guangtao Zhai
In contrast, we reframe the gaze following detection task as simultaneously detecting human head locations and their gaze followings, aiming to jointly detect the human gaze location and gaze object in a unified, single-stage pipeline.
1 code implementation • ICCV 2023 • Guangyang Wu, Xiaohong Liu, Kunming Luo, Xi Liu, Qingqing Zheng, Shuaicheng Liu, Xinyang Jiang, Guangtao Zhai, Wenyi Wang
To train and evaluate the proposed AccFlow, we have constructed a large-scale high-quality dataset named CVO, which provides ground-truth optical flow labels between adjacent and distant frames.
no code implementations • ICCV 2023 • Danyang Tu, Wei Sun, Guangtao Zhai, Wei Shen
We propose an agglomerative Transformer (AGER) that enables Transformer-based human-object interaction (HOI) detectors to flexibly exploit extra instance-level cues in a single-stage and end-to-end manner for the first time.
1 code implementation • 9 Aug 2023 • Tengchuan Kou, Xiaohong Liu, Wei Sun, Jun Jia, Xiongkuo Min, Guangtao Zhai, Ning Liu
Indeed, most existing quality assessment models evaluate video quality as a whole without specifically taking the subjective experience of video stability into consideration.
no code implementations • 28 Jul 2023 • Kang Fu, Xiaohong Liu, Jun Jia, ZiCheng Zhang, Yicong Peng, Jia Wang, Guangtao Zhai
To achieve end-to-end training of the framework, we integrate a neural network that simulates the ISP pipeline to handle the RAW-to-RGB conversion process.
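The end-to-end training described above relies on a differentiable stand-in for the ISP's RAW-to-RGB conversion. Below is a minimal sketch of such a module; `TinyISPNet`, `pack_bayer`, and the toy architecture are illustrative assumptions, not the authors' network.

```python
import torch
import torch.nn as nn

class TinyISPNet(nn.Module):
    """Toy stand-in for an ISP-simulating network: maps packed 4-channel
    RGGB RAW at half resolution to a full-resolution RGB image."""
    def __init__(self, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(4, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, 3 * 4, 3, padding=1), nn.PixelShuffle(2))  # back to full resolution
    def forward(self, raw_packed):                    # (N, 4, H/2, W/2)
        return torch.sigmoid(self.body(raw_packed))   # (N, 3, H, W) in [0, 1]

def pack_bayer(raw):                                  # (N, 1, H, W) RGGB mosaic
    return torch.cat([raw[:, :, 0::2, 0::2], raw[:, :, 0::2, 1::2],
                      raw[:, :, 1::2, 0::2], raw[:, :, 1::2, 1::2]], dim=1)

rgb = TinyISPNet()(pack_bayer(torch.rand(1, 1, 64, 64)))  # differentiable RAW-to-RGB
print(rgb.shape)                                          # torch.Size([1, 3, 64, 64])
```

Because every operation is differentiable, gradients from a downstream loss can flow back through the RAW-to-RGB step, which is the point of inserting such a network.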
no code implementations • 26 Jul 2023 • Wei Sun, Wen Wen, Xiongkuo Min, Long Lan, Guangtao Zhai, Kede Ma
By minimalistic, we restrict our family of BVQA models to build only upon basic blocks: a video preprocessor (for aggressive spatiotemporal downsampling), a spatial quality analyzer, an optional temporal quality analyzer, and a quality regressor, all with the simplest possible instantiations.
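A minimal PyTorch sketch of that basic-block layout is given below; the class name `MinimalBVQA` and every concrete choice (downsampling factors, channel widths, a plain temporal average standing in for the optional temporal analyzer) are assumptions for illustration, not the paper's instantiations.

```python
import torch
import torch.nn as nn

class MinimalBVQA(nn.Module):
    """Preprocessor -> spatial quality analyzer (-> optional temporal analyzer,
    here a plain temporal average) -> quality regressor."""
    def __init__(self):
        super().__init__()
        self.spatial = nn.Sequential(                       # per-frame features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.regressor = nn.Linear(32, 1)                   # features -> quality score

    def preprocess(self, video, t_stride=8, size=64):       # aggressive spatiotemporal downsampling
        frames = video[:, ::t_stride]                       # (N, T', 3, H, W)
        n, t, c, h, w = frames.shape
        frames = frames.reshape(n * t, c, h, w)
        return nn.functional.interpolate(frames, size=(size, size)), n, t

    def forward(self, video):                               # (N, T, 3, H, W)
        x, n, t = self.preprocess(video)
        feats = self.spatial(x).reshape(n, t, -1).mean(dim=1)   # temporal pooling
        return self.regressor(feats).squeeze(-1)

print(MinimalBVQA()(torch.rand(2, 32, 3, 256, 256)).shape)  # torch.Size([2])
```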
1 code implementation • 20 Jul 2023 • Xilei Zhu, Huiyu Duan, Yuqin Cao, Yuxin Zhu, Yucheng Zhu, Jing Liu, Li Chen, Xiongkuo Min, Guangtao Zhai
Omnidirectional videos (ODVs) play an increasingly important role in the application fields of medical, education, advertising, tourism, etc.
no code implementations • 19 Jul 2023 • Xiaohong Liu, Xiongkuo Min, Wei Sun, Yulun Zhang, Kai Zhang, Radu Timofte, Guangtao Zhai, Yixuan Gao, Yuqin Cao, Tengchuan Kou, Yunlong Dong, Ziheng Jia, Yilin Li, Wei Wu, Shuming Hu, Sibin Deng, Pengxiang Xiao, Ying Chen, Kai Li, Kai Zhao, Kun Yuan, Ming Sun, Heng Cong, Hao Wang, Lingzhi Fu, Yusheng Zhang, Rongyu Zhang, Hang Shi, Qihang Xu, Longan Xiao, Zhiliang Ma, Mirko Agarla, Luigi Celona, Claudio Rota, Raimondo Schettini, Zhiwei Huang, Yanan Li, Xiaotao Wang, Lei Lei, Hongye Liu, Wei Hong, Ironhead Chuang, Allen Lin, Drake Guan, Iris Chen, Kae Lou, Willy Huang, Yachun Tasi, Yvonne Kao, Haotian Fan, Fangyuan Kong, Shiqi Zhou, Hao liu, Yu Lai, Shanshan Chen, Wenqi Wang, HaoNing Wu, Chaofeng Chen, Chunzheng Zhu, Zekun Guo, Shiling Zhao, Haibing Yin, Hongkui Wang, Hanene Brachemi Meftah, Sid Ahmed Fezza, Wassim Hamidouche, Olivier Déforges, Tengfei Shi, Azadeh Mansouri, Hossein Motamednia, Amir Hossein Bakhtiari, Ahmad Mahmoudi Aznaveh
61 participating teams submitted their prediction results during the development phase, with a total of 3168 submissions.
1 code implementation • 6 Jul 2023 • ZiCheng Zhang, Wei Sun, Yingjie Zhou, HaoNing Wu, Chunyi Li, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, Weisi Lin
To address this gap, we propose SJTU-H3D, a subjective quality assessment database specifically designed for full-body digital humans.
1 code implementation • 1 Jul 2023 • Jiarui Wang, Huiyu Duan, Jing Liu, Shi Chen, Xiongkuo Min, Guangtao Zhai
In this paper, in order to get a better understanding of the human visual preferences for AIGIs, a large-scale IQA database for AIGC is established, named AIGCIQA2023.
1 code implementation • 9 Jun 2023 • ZiCheng Zhang, Wei Sun, Houning Wu, Yingjie Zhou, Chunyi Li, Xiongkuo Min, Guangtao Zhai, Weisi Lin
Model-based 3DQA methods extract features directly from the 3D models, which are characterized by their high degree of complexity.
1 code implementation • 7 Jun 2023 • Chunyi Li, ZiCheng Zhang, HaoNing Wu, Wei Sun, Xiongkuo Min, Xiaohong Liu, Guangtao Zhai, Weisi Lin
With the rapid advancements of the text-to-image generative model, AI-generated images (AGIs) have been widely applied to entertainment, education, social media, etc.
1 code implementation • 16 May 2023 • Yunlong Dong, Xiaohong Liu, Yixuan Gao, Xunchu Zhou, Tao Tan, Guangtao Zhai
To this end, we first construct a Low-Light Video Enhancement Quality Assessment (LLVE-QA) dataset in which 254 original low-light videos are collected and then enhanced by leveraging 8 LLVE algorithms to obtain 2,060 videos in total.
no code implementations • CVPR 2023 • Sijing Wu, Yichao Yan, Yunhao Li, Yuhao Cheng, Wenhan Zhu, Ke Gao, Xiaobo Li, Guangtao Zhai
To bring digital avatars into people's lives, it is highly demanded to efficiently generate complete, realistic, and animatable head avatars.
1 code implementation • 30 Mar 2023 • Huiyu Duan, Wei Shen, Xiongkuo Min, Danyang Tu, Long Teng, Jia Wang, Guangtao Zhai
Recently, masked autoencoders (MAE) for feature pre-training have further unleashed the potential of Transformers, leading to state-of-the-art performances on various high-level vision tasks.
Ranked #4 on Image Defocus Deblurring on DPD (Dual-view)
1 code implementation • CVPR 2023 • Weixia Zhang, Guangtao Zhai, Ying WEI, Xiaokang Yang, Kede Ma
We aim at advancing blind image quality assessment (BIQA), which predicts the human perception of image quality without any reference information.
1 code implementation • CVPR 2023 • ZiCheng Zhang, Wei Wu, Wei Sun, Dangyang Tu, Wei Lu, Xiongkuo Min, Ying Chen, Guangtao Zhai
User-generated content (UGC) live videos are often bothered by various distortions during capture procedures and thus exhibit diverse visual qualities.
1 code implementation • 22 Mar 2023 • ZiCheng Zhang, Chunyi Li, Wei Sun, Xiaohong Liu, Xiongkuo Min, Guangtao Zhai
AI Generated Content (AIGC) has gained widespread attention with the increasing efficiency of deep learning in content creation.
1 code implementation • 16 Mar 2023 • Yixuan Gao, Yuqin Cao, Tengchuan Kou, Wei Sun, Yunlong Dong, Xiaohong Liu, Xiongkuo Min, Guangtao Zhai
Few researchers have specifically proposed a video quality assessment method for video enhancement, and there is also no comprehensive video quality assessment dataset available in public.
1 code implementation • 14 Mar 2023 • ZiCheng Zhang, Wei Sun, Yingjie Zhou, Jun Jia, Zhichao Zhang, Jing Liu, Xiongkuo Min, Guangtao Zhai
Computer graphics images (CGIs) are artificially generated by means of computer programs and are widely perceived under various scenarios, such as games, streaming media, etc.
no code implementations • 11 Mar 2023 • Junwen Xiong, Ganglai Wang, Peng Zhang, Wei Huang, Yufei zha, Guangtao Zhai
Incorporating the audio stream enables Video Saliency Prediction (VSP) to imitate the selective attention mechanism of human brain.
no code implementations • 4 Mar 2023 • Yuqin Cao, Xiongkuo Min, Wei Sun, XiaoPing Zhang, Guangtao Zhai
Specifically, we construct the first UGC AVQA database named the SJTU-UAV database, which includes 520 in-the-wild UGC audio and video (A/V) sequences, and conduct a user study to obtain the mean opinion scores of the A/V sequences.
no code implementations • 17 Feb 2023 • ZiCheng Zhang, Wei Sun, Yingjie Zhou, Wei Lu, Yucheng Zhu, Xiongkuo Min, Guangtao Zhai
Currently, great effort has been put into improving the effectiveness of 3D model quality assessment (3DQA) methods.
no code implementations • CVPR 2023 • Junwen Xiong, Ganglai Wang, Peng Zhang, Wei Huang, Yufei zha, Guangtao Zhai
Incorporating the audio stream enables Video Saliency Prediction (VSP) to imitate the selective attention mechanism of human brain.
no code implementations • ICCV 2023 • Yuan Tian, Guo Lu, Guangtao Zhai, Zhiyong Gao
Most video compression methods aim to improve the visual quality of decoded videos, rather than particularly guaranteeing semantic completeness, which deteriorates downstream video analysis tasks, e.g., action recognition.
1 code implementation • 24 Dec 2022 • ZiCheng Zhang, Yingjie Zhou, Wei Sun, Wei Lu, Xiongkuo Min, Yu Wang, Guangtao Zhai
In recent years, considerable effort has been devoted to pushing forward the real-world application of dynamic digital humans (DDHs).
1 code implementation • 9 Oct 2022 • Yunhao Li, Zhenbo Yu, Yucheng Zhu, Bingbing Ni, Guangtao Zhai, Wei Shen
Stage I introduces a test time adaptation strategy, which improves the physical plausibility of synthesized human skeleton motions by optimizing skeleton joint locations.
1 code implementation • 3 Oct 2022 • Weixia Zhang, Dingquan Li, Xiongkuo Min, Guangtao Zhai, Guodong Guo, Xiaokang Yang, Kede Ma
No-reference image quality assessment (NR-IQA) aims to quantify how humans perceive visual distortions of digital images without access to their undistorted references.
1 code implementation • 20 Sep 2022 • ZiCheng Zhang, Yingjie Zhou, Wei Sun, Xiongkuo Min, Yuzhe Wu, Guangtao Zhai
Digital humans have attracted increasing research interest over the last decade, with considerable effort devoted to their generation, representation, rendering, and animation.
1 code implementation • 1 Sep 2022 • ZiCheng Zhang, Wei Sun, Xiongkuo Min, Quan Zhou, Jun He, Qiyuan Wang, Guangtao Zhai
Specifically, we split the point clouds into sub-models to represent local geometry distortions such as point shift and down-sampling (a minimal splitting sketch follows this entry).
Ranked #1 on Point Cloud Quality Assessment on SJTU-PCQA
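The sub-model splitting mentioned above can be sketched as farthest-point seeds plus k-nearest-neighbor patches; the helper names, seed count, and patch size below are assumptions, not the paper's settings.

```python
import numpy as np

def farthest_point_seeds(points, n_seeds):
    """Greedy farthest point sampling: pick n_seeds well-spread seed points."""
    seeds = [0]
    dist = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n_seeds - 1):
        seeds.append(int(dist.argmax()))
        dist = np.minimum(dist, np.linalg.norm(points - points[seeds[-1]], axis=1))
    return points[seeds]

def split_into_submodels(points, n_seeds=6, k=500):
    """Return n_seeds local patches (k nearest neighbors of each seed), each
    exposing local geometry distortions such as point shift and down-sampling."""
    patches = []
    for seed in farthest_point_seeds(points, n_seeds):
        idx = np.argsort(np.linalg.norm(points - seed, axis=1))[:k]
        patches.append(points[idx])
    return patches

cloud = np.random.rand(20000, 3)                      # toy point cloud
print([patch.shape for patch in split_into_submodels(cloud)])
```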
no code implementations • 31 Aug 2022 • Wei Zhou, Qi Yang, Qiuping Jiang, Guangtao Zhai, Weisi Lin
Objective quality assessment of 3D point clouds is essential for the development of immersive multimedia systems in real-world applications.
no code implementations • 30 Aug 2022 • ZiCheng Zhang, Wei Sun, Yucheng Zhu, Xiongkuo Min, Wei Wu, Ying Chen, Guangtao Zhai
Point cloud is one of the most widely used digital representation formats for three-dimensional (3D) content; its visual quality may suffer from noise and geometric shift distortions during production, as well as compression and downsampling distortions during transmission.
no code implementations • 6 Jul 2022 • Huiyu Duan, Guangtao Zhai, Xiongkuo Min, Yucheng Zhu, Yi Fang, Xiaokang Yang
The original and distorted omnidirectional images, subjective quality ratings, and the head and eye movement data together constitute the OIQA database.
2 code implementations • 25 Jun 2022 • Wang Shen, Cheng Ming, Wenbo Bao, Guangtao Zhai, Li Chen, Zhiyong Gao
With AutoFI and SktFI, the interpolated animation frames show high perceptual quality.
no code implementations • 10 Jun 2022 • Tao Wang, ZiCheng Zhang, Wei Sun, Xiongkuo Min, Wei Lu, Guangtao Zhai
However, limited work has been put forward to tackle the problem of quality assessment for computer-graphics-generated images (CG-IQA).
no code implementations • 9 Jun 2022 • Yu Fan, ZiCheng Zhang, Wei Sun, Xiongkuo Min, Wei Lu, Tao Wang, Ning Liu, Guangtao Zhai
Point cloud is one of the most widely used digital formats of 3D models, the visual quality of which is quite sensitive to distortions such as downsampling, noise, and compression.
no code implementations • 9 Jun 2022 • Wei Lu, Wei Sun, Wenhan Zhu, Xiongkuo Min, ZiCheng Zhang, Tao Wang, Guangtao Zhai
In this paper, we first conduct an example experiment (i.e., the face detection task) to demonstrate that the quality of the SIs has a crucial impact on the performance of the IVSS, and then propose a saliency-based deep neural network for the blind quality assessment of the SIs, which helps the IVSS filter out low-quality SIs and improve detection and recognition performance.
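One way to read the saliency-based design above is that local quality estimates are pooled with saliency weights so that degradations in attended regions dominate the score. The sketch below shows only that pooling idea; `saliency_weighted_quality` is a hypothetical helper, and the paper's actual network is more involved.

```python
import numpy as np

def saliency_weighted_quality(local_quality, saliency, eps=1e-8):
    """Pool patch-wise quality estimates into one score, weighting each location
    by predicted saliency so that degradations in attended regions dominate."""
    weights = saliency / (saliency.sum() + eps)
    return float((local_quality * weights).sum())

quality_map = np.random.rand(60, 80)     # stand-in for patch-wise quality estimates
saliency_map = np.random.rand(60, 80)    # stand-in for a saliency network's output
print(round(saliency_weighted_quality(quality_map, saliency_map), 3))
```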
no code implementations • 9 Jun 2022 • ZiCheng Zhang, Wei Sun, Xiongkuo Min, Wenhan Zhu, Tao Wang, Wei Lu, Guangtao Zhai
Therefore, in this paper, we propose a no-reference deep-learning image quality assessment method based on frequency maps because the artifacts caused by SISR algorithms are quite sensitive to frequency information.
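A simple way to obtain the kind of frequency map the abstract refers to is a centered log-magnitude spectrum; the exact map definition used in the paper may differ, and `frequency_map` / `high_frequency_energy` are hypothetical helpers for illustration.

```python
import numpy as np

def frequency_map(image):
    """Centered log-magnitude spectrum of a grayscale image: one plausible
    'frequency map' input for an NR-IQA network."""
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(image))))

def high_frequency_energy(freq_map, radius_ratio=0.25):
    """Share of spectral energy outside a central low-frequency disc; SR
    artifacts such as over-sharpening and ringing tend to inflate this."""
    h, w = freq_map.shape
    yy, xx = np.mgrid[:h, :w]
    outside = np.hypot(yy - h / 2, xx - w / 2) > radius_ratio * min(h, w)
    return freq_map[outside].sum() / freq_map.sum()

sr_output = np.random.rand(128, 128)     # stand-in for a grayscale SR result
fmap = frequency_map(sr_output)
print(fmap.shape, round(float(high_frequency_energy(fmap)), 3))
```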
no code implementations • 9 Jun 2022 • Wei Lu, Wei Sun, Xiongkuo Min, Wenhan Zhu, Quan Zhou, Jun He, Qiyuan Wang, ZiCheng Zhang, Tao Wang, Guangtao Zhai
In this paper, we propose a deep learning-based BIQA model for 4K content, which on one hand can recognize true and pseudo 4K content and on the other hand can evaluate their perceptual visual quality.
no code implementations • 8 Jun 2022 • ZiCheng Zhang, Wei Sun, Wei Wu, Ying Chen, Xiongkuo Min, Guangtao Zhai
Nowadays, mainstream full-reference (FR) metrics are effective at predicting the quality of compressed images at coarse-grained levels (where the bit-rate differences between compressed images are obvious); however, they may perform poorly for fine-grained compressed images whose bit-rate differences are quite subtle.
no code implementations • 4 Jun 2022 • Danyang Tu, Wei Sun, Xiongkuo Min, Guangtao Zhai, Wei Shen
We present a novel vision Transformer, named TUTOR, which is able to learn tubelet tokens, serving as highly abstracted spatiotemporal representations, for video-based human-object interaction (V-HOI) detection.
no code implementations • 12 May 2022 • Qiuping Jiang, Jiawu Xu, Yudong Mao, Wei Zhou, Xiongkuo Min, Guangtao Zhai
The DDB-Net contains three modules, i.e., an image decomposition module, a feature encoding module, and a bilinear pooling module.
1 code implementation • 29 Apr 2022 • Wei Sun, Xiongkuo Min, Wei Lu, Guangtao Zhai
The proposed model utilizes very sparse frames to extract spatial features and dense frames (i.e., the video chunk) at a very low spatial resolution to extract motion features, and thereby has low computational complexity (a sketch of this sparse/dense sampling follows this entry).
Ranked #4 on Video Quality Assessment on YouTube-UGC
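The sparse/dense sampling mentioned above can be sketched as follows; `sample_for_vqa`, the frame counts, and the low resolution are illustrative assumptions rather than the model's actual settings.

```python
import numpy as np

def sample_for_vqa(video, n_key_frames=8, chunk_len=32, low_res=64):
    """Return (a) a few full-resolution key frames for spatial quality features
    and (b) a dense, heavily downscaled chunk for motion features."""
    t, h, w, _ = video.shape
    key_idx = np.linspace(0, t - 1, n_key_frames).astype(int)
    key_frames = video[key_idx]                          # sparse, full resolution

    start = max(0, t // 2 - chunk_len // 2)              # centered dense chunk
    chunk = video[start:start + chunk_len]
    ys = np.linspace(0, h - 1, low_res).astype(int)      # nearest-neighbor resize
    xs = np.linspace(0, w - 1, low_res).astype(int)
    return key_frames, chunk[:, ys][:, :, xs]            # dense, low resolution

video = np.random.rand(60, 216, 384, 3).astype(np.float32)   # toy clip
keys, chunk = sample_for_vqa(video)
print(keys.shape, chunk.shape)       # (8, 216, 384, 3) (32, 64, 64, 3)
```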
1 code implementation • 18 Apr 2022 • Huiyu Duan, Wei Shen, Xiongkuo Min, Danyang Tu, Jing Li, Guangtao Zhai
Therefore, in this paper, we mainly analyze the interaction effect between background (BG) scenes and AR contents, and study the saliency prediction problem in AR.
1 code implementation • 11 Apr 2022 • Huiyu Duan, Xiongkuo Min, Yucheng Zhu, Guangtao Zhai, Xiaokang Yang, Patrick Le Callet
An objective metric termed CFIQA is also proposed to better evaluate the confusing image quality.
no code implementations • CVPR 2022 • Danyang Tu, Xiongkuo Min, Huiyu Duan, Guodong Guo, Guangtao Zhai, Wei Shen
In contrast, we redefine the HGT detection task as detecting human head locations and their gaze targets, simultaneously.
no code implementations • 20 Mar 2022 • Danyang Tu, Xiongkuo Min, Huiyu Duan, Guodong Guo, Guangtao Zhai, Wei Shen
Iwin Transformer is a hierarchical Transformer which progressively performs token representation learning and token agglomeration within irregular windows.
no code implementations • 2 Mar 2022 • Yixuan Gao, Xiongkuo Min, Wenhan Zhu, Xiao-Ping Zhang, Guangtao Zhai
Experimental results verify the feasibility of using the alpha stable model to describe the IQSD, and demonstrate the effectiveness of the proposed objective alpha-stable-model-based IQSD prediction method.
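As a concrete illustration of fitting an alpha stable model to an image quality score distribution (IQSD), here is a hedged sketch using scipy's `levy_stable`; the simulated scores are an assumption, and the generic maximum-likelihood fit shown here can be slow and need not match the estimation procedure used in the paper.

```python
import numpy as np
from scipy.stats import levy_stable

# Toy stand-in for the opinion scores a single image received in a subjective
# test; a real IQSD would come from the raw ratings, not from simulation.
rng = np.random.default_rng(0)
scores = np.clip(rng.normal(60, 12, size=50) + 3 * rng.standard_t(3, size=50), 1, 100)

# Fit the four-parameter alpha stable model to the empirical score distribution
# (levy_stable.fit is the generic scipy fit and can take a while to converge).
alpha, beta, loc, scale = levy_stable.fit(scores)
print(f"alpha={alpha:.2f} beta={beta:.2f} loc={loc:.1f} scale={scale:.1f}")

# The fitted density then serves as the predicted IQSD for this image.
grid = np.linspace(1, 100, 99)
iqsd = levy_stable.pdf(grid, alpha, beta, loc=loc, scale=scale)
print("IQSD mode near score", grid[iqsd.argmax()])
```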
no code implementations • 6 Feb 2022 • Yuan Tian, Guo Lu, Yichao Yan, Guangtao Zhai, Li Chen, Zhiyong Gao
However, in real-world scenarios, videos are first compressed before transmission and then decompressed for understanding.
no code implementations • 3 Jan 2022 • Shunyu Yao, RuiZhe Zhong, Yichao Yan, Guangtao Zhai, Xiaokang Yang
Specifically, neural radiance field takes lip movements features and personalized attributes as two disentangled conditions, where lip movements are directly predicted from the audio inputs to achieve lip-synchronized generation.
no code implementations • CVPR 2022 • Jun Jia, Zhongpai Gao, Dandan Zhu, Xiongkuo Min, Guangtao Zhai, Xiaokang Yang
In addition, the automatic localization of hidden codes significantly reduces the time of manually correcting geometric distortions for photos, which is a revolutionary innovation for information hiding in mobile applications.
no code implementations • 5 Nov 2021 • Zhenxing Dong, Hong Cao, Wang Shen, Yu Gan, Yuye Ling, Guangtao Zhai, Yikai Su
In particular, we propose to use a convolutional neural network (CNN) to learn the cutoff frequency of the real-world degradation process.
no code implementations • 10 Sep 2021 • Cheng Yang, Gene Cheung, Wai-tian Tan, Guangtao Zhai
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
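A classic instance of algorithm unfolding is unrolled ISTA for sparse coding, where each iteration becomes a layer with a learnable step size and threshold. The sketch below illustrates that general idea only; it is not the specific model-based algorithm unfolded in this paper, and `UnrolledISTA` is a hypothetical name.

```python
import torch
import torch.nn as nn

class UnrolledISTA(nn.Module):
    """Each ISTA iteration for sparse coding, x <- soft(x - s * A^T(Ax - y), t),
    becomes one neural layer whose step size s and threshold t are learnable."""
    def __init__(self, A, n_layers=5):
        super().__init__()
        self.register_buffer("A", A)                          # (m, n) dictionary
        L = float(torch.linalg.matrix_norm(A, 2)) ** 2        # Lipschitz constant of the gradient
        self.step = nn.Parameter(torch.full((n_layers,), 1.0 / L))
        self.thresh = nn.Parameter(torch.full((n_layers,), 0.1 / L))

    def forward(self, y):                                     # y: (batch, m)
        x = torch.zeros(y.shape[0], self.A.shape[1], device=y.device)
        for step, thresh in zip(self.step, self.thresh):
            z = x - step * ((x @ self.A.T - y) @ self.A)      # gradient step
            x = torch.sign(z) * torch.clamp(z.abs() - thresh, min=0.0)  # soft threshold
        return x

model = UnrolledISTA(torch.randn(20, 50))
print(model(torch.randn(4, 20)).shape)                        # torch.Size([4, 50])
```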
1 code implementation • 19 Aug 2021 • Bowen Li, Weixia Zhang, Meng Tian, Guangtao Zhai, Xianpei Wang
The inaccessibility of reference videos with pristine quality and the complexity of authentic distortions pose great challenges for this kind of blind video quality assessment (BVQA) task.
Ranked #4 on Video Quality Assessment on MSU NR VQA Database
no code implementations • 7 Aug 2021 • Xinru Chen, Menghan Hu, Guangtao Zhai
Cough is a common symptom of respiratory and lung diseases.
no code implementations • 28 Jul 2021 • Weixia Zhang, Kede Ma, Guangtao Zhai, Xiaokang Yang
In this paper, we present a simple yet effective continual learning method for BIQA with improved quality prediction accuracy, plasticity-stability trade-off, and task-order/-length robustness.
no code implementations • 27 Jul 2021 • Hang Liu, Menghan Hu, Yuzhen Chen, Qingli Li, Guangtao Zhai, Simon X. Yang, Xiao-Ping Zhang, Xiaokang Yang
This work demonstrates that it is practicable for blind people to feel the world through the brush in their hands.
1 code implementation • ICCV 2021 • Yuan Tian, Guo Lu, Xiongkuo Min, Zhaohui Che, Guangtao Zhai, Guodong Guo, Zhiyong Gao
After optimization, the video downscaled by our framework preserves more meaningful information, which is beneficial for both the upscaling step and downstream tasks, e.g., the video action recognition task.
1 code implementation • 22 Jul 2021 • Yuan Tian, Yichao Yan, Guangtao Zhai, Guodong Guo, Zhiyong Gao
In this paper, we propose a unified action recognition framework to investigate the dynamic nature of video content by introducing the following designs.
Ranked #13 on Action Recognition on Something-Something V1
2 code implementations • 5 Jul 2021 • ZiCheng Zhang, Wei Sun, Xiongkuo Min, Tao Wang, Wei Lu, Guangtao Zhai
Therefore, many related studies such as point cloud quality assessment (PCQA) and mesh quality assessment (MQA) have been carried out to measure the visual quality degradations of 3D models.
Ranked #3 on Point Cloud Quality Assessment on WPC
1 code implementation • CVPR 2021 • Yi Fang, Jiapeng Tang, Wang Shen, Wei Shen, Xiao Gu, Li Song, Guangtao Zhai
In the third stage, we use the generated dual attention as guidance to perform two sub-tasks: (1) identifying whether the gaze target is inside or out of the image; (2) locating the target if inside.
no code implementations • NeurIPS 2021 • Cheng Yang, Gene Cheung, Guangtao Zhai
We repose the SDR dual for solution $\bar{\mathbf{H}}$, then replace the PSD cone constraint $\bar{\mathbf{H}} \succeq 0$ with linear constraints derived from GDPA -- sufficient conditions to ensure $\bar{\mathbf{H}}$ is PSD -- so that the optimization becomes an LP per iteration.
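The linear surrogate for the PSD cone comes from the Gershgorin circle theorem; the following is only a hedged sketch of the kind of sufficient condition involved, with the exact GDPA scaling and linearization left to the paper.

```latex
% Sketch: Gershgorin-based sufficient condition for \bar{H} \succeq 0.
% Every eigenvalue of \bar{H} lies in a disc centered at \bar{H}_{ii} with
% radius \sum_{j \ne i} |\bar{H}_{ij}|, and a diagonal similarity transform
% S \bar{H} S^{-1}, S = \mathrm{diag}(s_1,\dots,s_n), leaves eigenvalues
% unchanged.  Requiring every disc left-end of the transformed matrix to be
% nonnegative therefore guarantees PSD-ness:
\[
  \bar{H}_{ii} \;-\; \sum_{j \neq i} \left|\tfrac{s_i}{s_j}\,\bar{H}_{ij}\right| \;\ge\; 0,
  \qquad i = 1,\dots,n .
\]
% For scalars s_i held fixed from the previous iteration (and known signs of
% the off-diagonal entries), these conditions are linear in \bar{H}.
```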
1 code implementation • 2 Jun 2021 • Wei Sun, Tao Wang, Xiongkuo Min, Fuwang Yi, Guangtao Zhai
The proposed VQA framework consists of three modules, the feature extraction module, the quality regression module, and the quality pooling module.
Ranked #12 on Video Quality Assessment on MSU NR VQA Database
no code implementations • 17 Mar 2021 • Wang Shen, Wenbo Bao, Guangtao Zhai, Charlie L Wang, Jerry W Hu, Zhiyong Gao
An effective approach is to transmit frames in lower-quality under poor bandwidth conditions, such as using scalable video coding.
1 code implementation • 19 Feb 2021 • Weixia Zhang, Dingquan Li, Chao Ma, Guangtao Zhai, Xiaokang Yang, Kede Ma
In this paper, we formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets, building on what was learned from previously seen data.
1 code implementation • ICCV 2021 • Yunhao Li, Wei Shen, Zhongpai Gao, Yucheng Zhu, Guangtao Zhai, Guodong Guo
Specifically, the local region is obtained as a 2D cone-shaped field along the 2D projection of the sight line starting at the human subject's head position, and the distant region is obtained by searching along the sight line in 3D sphere space.
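The 2D cone-shaped field along the projected sight line can be sketched as a simple angular mask around the gaze direction; `cone_field_mask`, the half-angle, and the binary (rather than soft) field are assumptions for illustration.

```python
import numpy as np

def cone_field_mask(h, w, head_xy, gaze_dir, half_angle_deg=30.0):
    """Binary mask of a 2D cone-shaped field: pixels whose direction from the
    head position lies within half_angle_deg of the projected sight line."""
    ys, xs = np.mgrid[:h, :w]
    vec = np.stack([xs - head_xy[0], ys - head_xy[1]], axis=-1).astype(float)
    vec /= np.linalg.norm(vec, axis=-1, keepdims=True) + 1e-8
    gaze = np.asarray(gaze_dir, dtype=float)
    gaze /= np.linalg.norm(gaze) + 1e-8
    return vec @ gaze >= np.cos(np.deg2rad(half_angle_deg))

mask = cone_field_mask(240, 320, head_xy=(100, 60), gaze_dir=(1.0, 0.5))
print(mask.shape, int(mask.sum()))       # pixels inside the cone along the sight line
```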
no code implementations • 20 Oct 2020 • Yunlu Wang, Cheng Yang, Menghan Hu, Jian Zhang, Qingli Li, Guangtao Zhai, Xiao-Ping Zhang
This paper presents an unobtrusive solution that can automatically identify deep breathing when a person walks past the global depth camera.
no code implementations • 9 Oct 2020 • Yuzhen Chen, Menghan Hu, Chunjun Hua, Guangtao Zhai, Jian Zhang, Qingli Li, Simon X. Yang
To address the problem that we do not know which service stage a mask belongs to, we propose a detection system based on the mobile phone.
no code implementations • 22 Jul 2020 • Yuan Tian, Guangtao Zhai, Zhiyong Gao
More specifically, an action perceptron synthesizer is proposed to generate the kernels from a bag of fixed-size kernels that interact through dense routing paths.
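A simplified reading of synthesizing kernels from a bag of fixed-size kernels is a routed mixture of learnable base kernels; the sketch below uses a plain softmax router and is not the paper's action perceptron synthesizer, and `KernelSynthesizer` is a hypothetical name.

```python
import torch
import torch.nn as nn

class KernelSynthesizer(nn.Module):
    """Generate a sample-dependent conv kernel as a routed mixture of a small
    bag of fixed-size base kernels, then apply it to that sample."""
    def __init__(self, channels=16, bag_size=4, k=3):
        super().__init__()
        self.bag = nn.Parameter(0.02 * torch.randn(bag_size, channels, channels, k, k))
        self.router = nn.Linear(channels, bag_size)          # toy routing path

    def forward(self, x):                                    # (N, C, H, W)
        weights = torch.softmax(self.router(x.mean(dim=(2, 3))), dim=-1)   # (N, bag)
        kernels = torch.einsum("nb,boikl->noikl", weights, self.bag)       # per-sample kernels
        outs = [nn.functional.conv2d(x[i:i + 1], kernels[i], padding=1)
                for i in range(x.shape[0])]
        return torch.cat(outs, dim=0)

print(KernelSynthesizer()(torch.rand(2, 16, 32, 32)).shape)   # torch.Size([2, 16, 32, 32])
```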
1 code implementation • 28 May 2020 • Weixia Zhang, Kede Ma, Guangtao Zhai, Xiaokang Yang
Nevertheless, due to the distributional shift between images simulated in the laboratory and captured in the wild, models trained on databases with synthetic distortions remain particularly weak at handling realistic distortions (and vice versa).
1 code implementation • 27 May 2020 • Zhongpai Gao, Guangtao Zhai, Junchi Yan, Xiaokang Yang
Various point neural networks have been developed with isotropic filters or using weighting matrices to overcome the structure inconsistency on point clouds.
1 code implementation • 21 Apr 2020 • Zhongpai Gao, Junchi Yan, Guangtao Zhai, Juyong Zhang, Yiyan Yang, Xiaokang Yang
Mesh is a powerful data structure for 3D shapes.
no code implementations • 15 Apr 2020 • Zheng Jiang, Menghan Hu, Lei Fan, Yaling Pan, Wei Tang, Guangtao Zhai, Yong Lu
In this work, we perform health screening by combining RGB and thermal videos obtained from a dual-mode camera with a deep learning architecture. We first develop a respiratory data capture technique for people wearing masks by using face recognition.
1 code implementation • CVPR 2020 • Wang Shen, Wenbo Bao, Guangtao Zhai, Li Chen, Xiongkuo Min, Zhiyong Gao
Existing works reduce motion blur and up-convert frame rate through two separate ways, including frame deblurring and frame interpolation.
no code implementations • 12 Feb 2020 • Yunlu Wang, Menghan Hu, Qingli Li, Xiao-Ping Zhang, Guangtao Zhai, Nan Yao
During the epidemic prevention and control period, our study can be helpful in prognosis, diagnosis and screening for the patients infected with COVID-19 (the novel coronavirus) based on breathing characteristics.
no code implementations • 12 Dec 2019 • Yucheng Zhu, Xiongkuo Min, Dandan Zhu, Ke Gu, Jiantao Zhou, Guangtao Zhai, Xiaokang Yang, Wenjun Zhang
The saliency annotations of head and eye movements for both original and augmented videos are collected and together constitute the ARVR dataset.
no code implementations • 3 Dec 2019 • Jun Jia, Zhongpai Gao, Kang Chen, Menghan Hu, Guangtao Zhai, Guodong Guo, Xiaokang Yang
To train a robust decoder against the physical distortion from the real world, a distortion network based on 3D rendering is inserted between the encoder and the decoder to simulate the camera imaging process.
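The training-time distortion module can be thought of as a differentiable noise layer sitting between the encoder and the decoder. The sketch below, `SimpleDistortionLayer`, is a simplified stand-in using only blur, brightness jitter, and additive noise, whereas the paper simulates the full camera imaging process with 3D rendering.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleDistortionLayer(nn.Module):
    """Differentiable stand-in for camera-imaging distortions, inserted between
    the encoder and decoder at training time: box blur + brightness jitter +
    additive sensor noise (no 3D rendering here)."""
    def __init__(self, blur_size=5, noise_std=0.02):
        super().__init__()
        kernel = torch.ones(3, 1, blur_size, blur_size) / (blur_size ** 2)
        self.register_buffer("kernel", kernel)
        self.noise_std = noise_std

    def forward(self, img):                                   # (N, 3, H, W) in [0, 1]
        img = F.conv2d(img, self.kernel, padding=self.kernel.shape[-1] // 2, groups=3)
        img = img * (0.8 + 0.4 * torch.rand(img.shape[0], 1, 1, 1))   # brightness jitter
        img = img + self.noise_std * torch.randn_like(img)            # sensor noise
        return img.clamp(0.0, 1.0)

encoded = torch.rand(2, 3, 128, 128)              # watermarked image from the encoder
print(SimpleDistortionLayer()(encoded).shape)     # passed on to the decoder
```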
1 code implementation • 18 Nov 2019 • Zhaohui Che, Ali Borji, Guangtao Zhai, Suiyi Ling, Jing Li, Patrick Le Callet
Deep neural networks are vulnerable to adversarial attacks.
no code implementations • 25 Sep 2019 • Zhongpai Gao, Juyong Zhang, Yudong Guo, Chao Ma, Guangtao Zhai, Xiaokang Yang
Moreover, the identity and expression representations are entangled in these models, which hinders many facial editing applications.
1 code implementation • 1 Jul 2019 • Weixia Zhang, Kede Ma, Guangtao Zhai, Xiaokang Yang
Computational models for blind image quality assessment (BIQA) are typically trained in well-controlled laboratory environments with limited generalizability to realistically distorted images.
1 code implementation • 16 May 2019 • Zhaohui Che, Ali Borji, Guangtao Zhai, Xiongkuo Min, Guodong Guo, Patrick Le Callet
Data size is the bottleneck for developing deep saliency models, because collecting eye-movement data is very time consuming and expensive.
no code implementations • 2 Apr 2019 • Zhaohui Che, Ali Borji, Guangtao Zhai, Suiyi Ling, Guodong Guo, Patrick Le Callet
The proposed attack only requires a part of the model information, and is able to generate a sparser and more insidious adversarial perturbation, compared to traditional image-space attacks.
1 code implementation • 10 Oct 2018 • Zhaohui Che, Ali Borji, Guangtao Zhai, Xiongkuo Min
Most of current studies on human gaze and saliency modeling have used high-quality stimuli.
no code implementations • 12 Jul 2017 • Menghan Hu, Xiongkuo Min, Guangtao Zhai, Wenhan Zhu, Yucheng Zhu, Zhaodi Wang, Xiaokang Yang, Guang Tian
Subsequently, the existing no-reference IQA algorithms, comprising 5 opinion-aware approaches (NFERM, GMLF, DIIVINE, BRISQUE, and BLIINDS2) and 8 opinion-unaware approaches (QAC, SISBLIM, NIQE, FISBLIM, CPBD, S3, and Fish_bb), were executed to evaluate the THz security image quality.