Further, we propose a local distortion extractor to obtain local distortion features from the pretrained CNN and a local distortion injector to inject the local distortion features into ViT.
On ImageNet, we achieve up to 4.7% and 4.6% higher top-1 accuracy compared to other methods for VGG-16 and ResNet-50, respectively.
Our approach to IQA involves the design of a heuristic coarse-to-fine network (CFANet) that leverages multi-scale features and progressively propagates multi-level semantic information to low-level representations in a top-down manner.
Regression-based blind image quality assessment (IQA) models are susceptible to biased training samples, leading to a biased estimation of model parameters.
To address this gap, we propose SJTU-H3D, a subjective quality assessment database specifically designed for full-body digital humans.
Extensive experiments have demonstrated that our approach outperforms recent state-of-the-art methods in R-D performance, visual quality, and downstream applications, at very low bitrates.
Model-based 3DQA methods extract features directly from the 3D models, an approach characterized by a high degree of complexity.
With the rapid advancements of the text-to-image generative model, AI-generated images (AGIs) have been widely applied to entertainment, education, social media, etc.
Though subjective studies have collected overall quality scores for these videos, how these abstract quality scores relate to specific factors remains obscure, hindering VQA methods from providing more concrete quality evaluations (e.g., the sharpness of a video).
However, most existing deep clustering methods learn a consensus representation or view-specific representations from multiple views via view-wise aggregation, ignoring the structural relationships among all samples.
The proliferation of videos collected during in-the-wild natural settings has pushed the development of effective Video Quality Assessment (VQA) methodologies.
Blind image quality assessment (BIQA) aims at automatically and accurately forecasting objective scores for visual signals, which has been widely used to monitor product and service quality in low-light applications, covering smartphone photography, video surveillance, autonomous driving, etc.
To benefit JND modeling, this work establishes a generalized JND dataset with a coarse-to-fine JND selection, which contains 106 source images and 1,642 JND maps, covering 25 distortion types.
A popular track of network compression approaches is Quantization-Aware Training (QAT), which accelerates the forward pass during neural network training and inference.
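To make the QAT idea concrete, the sketch below implements uniform "fake" quantization in plain NumPy: values are snapped to an integer grid in the forward pass while staying in floating point. The per-tensor symmetric scaling and the 8-bit setting are illustrative assumptions; real QAT pipelines also handle gradients (e.g., via a straight-through estimator), which is omitted here.

```python
import numpy as np

def fake_quantize(x, bits=8):
    """Uniform symmetric "fake" quantization: round values to a
    (2**bits - 1)-level integer grid, then map back to floats."""
    qmax = 2 ** (bits - 1) - 1          # e.g., 127 for 8 bits
    scale = np.abs(x).max() / qmax      # per-tensor scale factor
    return np.round(x / scale) * scale  # quantize-dequantize

# toy weight tensor; after fake quantization every value lies on
# the integer grid and the rounding error is bounded by scale / 2
w = np.array([-1.0, -0.3, 0.2, 1.0])
wq = fake_quantize(w, bits=8)
```

In a QAT loop, such a quantize-dequantize step would wrap every weight and activation so the network learns to tolerate the rounding error before deployment.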
Recent learning-based video quality assessment (VQA) algorithms are expensive to implement due to the cost of data collection of human quality opinions, and are less robust across various scenarios due to the biases of these opinions.
After combining them, we can better allocate the distortion in the compressed image under the guidance of JND to preserve high perceptual quality.
In this paper, we propose the first simulation model to reveal facial pore changes after using skincare products.
Then, semantic kernels are used to activate salient object locations in two groups of high-level features through dynamic convolution operations in DSMM.
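As a rough illustration of the idea of semantic kernels activating locations via dynamic convolution, the sketch below applies a 1x1 dynamic kernel (here just a channel-weight vector, a deliberate simplification of DSMM) to a feature map; locations whose channel response matches the kernel light up in the resulting activation map. The shapes and the kernel itself are toy assumptions, not the paper's actual design.

```python
import numpy as np

def dynamic_conv1x1(features, kernel):
    """1x1 dynamic convolution: contract a C-dim "semantic kernel"
    against a C x H x W feature map, yielding an H x W activation
    map that highlights locations matching the kernel."""
    return np.tensordot(kernel, features, axes=([0], [0]))

feat = np.zeros((3, 4, 4))
feat[0, 1, 1] = 1.0            # a "salient" response in channel 0
k = np.array([1.0, 0.0, 0.0])  # kernel attending to channel 0
act = dynamic_conv1x1(feat, k) # 4 x 4 activation map
```

In the actual model, the kernel would itself be predicted from high-level features rather than fixed, which is what makes the convolution "dynamic".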
In this paper, we analyze the degradation of a high-resolution (HR) image in terms of its intrinsic components, according to a degradation-based formulation model.
Recent blind SR methods propose to reconstruct SR images by relying on blur kernel estimation.
We propose IntegratedPIFu, a new pixel aligned implicit model that builds on the foundation set by PIFuHD.
In light of this, we propose the Disentangled Objective Video Quality Evaluator (DOVER) to learn the quality of UGC videos based on the two perspectives.
Ranked #1 on Video Quality Assessment on LIVE-VQC
On the other hand, existing practices such as resizing and cropping change the quality of the original videos due to the loss of details and content, and are therefore harmful to quality assessment.
Ranked #2 on Video Quality Assessment on KoNViD-1k (using extra training data)
Objective quality assessment of 3D point clouds is essential for the development of immersive multimedia systems in real-world applications.
However, they have a major drawback: the generated JND is assessed in the real-world signal domain rather than in the perceptual domain of the human brain.
Experiments show that the perceptual representation in the HVS is an effective way of predicting subjective temporal quality; thus, TPQI can, for the first time, achieve performance comparable to spatial quality metrics, and is even more effective in assessing videos with large temporal variations.
Consisting of fragments and FANet, the proposed FrAgment Sample Transformer for VQA (FAST-VQA) enables efficient end-to-end deep VQA and learns effective video-quality-related representations.
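The fragment idea can be sketched as grid mini-patch sampling: split each frame into a grid, take one small patch per cell at a random position, and stitch the patches back in their grid order. The grid size, patch size, and random placement below are illustrative assumptions; FAST-VQA's actual sampling spans time as well and differs in detail.

```python
import numpy as np

def sample_fragments(frame, grid=4, patch=8, rng=None):
    """Grid mini-patch ("fragment") sampling: one patch x patch
    crop per grid cell, stitched into a small fragment map that
    preserves local quality-sensitive texture."""
    rng = rng if rng is not None else np.random.default_rng(0)
    H, W = frame.shape[:2]
    ch, cw = H // grid, W // grid  # cell height / width
    out = np.empty((grid * patch, grid * patch) + frame.shape[2:], frame.dtype)
    for i in range(grid):
        for j in range(grid):
            y = i * ch + rng.integers(0, ch - patch + 1)
            x = j * cw + rng.integers(0, cw - patch + 1)
            out[i * patch:(i + 1) * patch,
                j * patch:(j + 1) * patch] = frame[y:y + patch, x:x + patch]
    return out

frame = np.arange(64 * 64, dtype=np.float32).reshape(64, 64)
frag = sample_fragments(frame)  # 32 x 32 fragment map from a 64 x 64 frame
```

Because each patch keeps its native resolution, local distortions survive the sampling, unlike resizing, which smooths them away.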
Ranked #2 on Video Quality Assessment on LIVE-FB LSVQ
Based on the prominent time-series modeling ability of Transformers, we propose a novel and effective Transformer-based VQA method to tackle these two issues.
Ranked #5 on Video Quality Assessment on KoNViD-1k
To achieve this, an adversarial loss is first proposed so that the adversarial images successfully attack the deep learning models.
To deal with this dilemma, we propose to distill knowledge on semantic patterns for a vast variety of image contents from multiple pre-trained object classification (POC) models to an IAA model.
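A common way to distill soft predictions from pre-trained teachers is a temperature-softened KL divergence; the sketch below shows that loss for a single teacher. Averaging such terms over several POC teachers would be a plausible reading of the setup above, but the exact loss and feature choice are assumptions, not the paper's formulation.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions,
    the standard knowledge-distillation objective."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

t = np.array([2.0, 0.5, -1.0])
loss_same = distill_loss(t, t)  # identical logits give zero loss
```

The temperature T softens the teacher's distribution so the student also learns the relative similarities among non-target classes, which is where the "semantic pattern" knowledge lives.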
The global and local contexts significantly contribute to the integrity of predictions in Salient Object Detection (SOD).
Ranked #1 on Salient Object Detection on PASCAL-S
no code implementations • 30 Mar 2022 • Qian Zheng, Ankur Purwar, Heng Zhao, Guang Liang Lim, Ling Li, Debasish Behera, Qian Wang, Min Tan, Rizhao Cai, Jennifer Werner, Dennis Sng, Maurice van Steensel, Weisi Lin, Alex C Kot
We present an automatic facial skin feature detection method that works across a variety of skin tones and age groups for selfies in the wild.
As the key component of ACCoNet, ACCoM activates the salient regions of output features of the encoder and transmits them to the decoder.
Besides, the experimental results of the proposed model show that the JNDs of the red and blue channels are larger than that of the green channel, demonstrating that more changes can be tolerated in the red and blue channels. This is in line with the well-known fact that the human visual system is more sensitive to the green channel than to the red and blue ones.
Then, following the coarse-to-fine strategy, we generate an initial coarse saliency map from high-level semantic features in a Correlation Module (CorrM).
Experimental results show that the VSD can be accurately estimated with the weights learnt by the nonlinear mapping function once its associated S-VSDs are available.
Meanwhile, an image predictor is designed and trained to achieve general-quality image reconstruction from the 16-bit gray-scale profile and signal features.
In this paper, we propose a novel Multi-Content Complementation Network (MCCNet) to explore the complementarity of multiple content for RSI-SOD.
Instead of explicitly formulating and fusing different masking effects in a bottom-up way, the proposed JND estimation model is dedicated to first predicting a critical perceptual lossless (CPL) counterpart of the original image and then calculating the difference map between the original image and the predicted CPL image as the JND map.
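The final step of that pipeline is a simple per-pixel difference. The sketch below computes a JND map from an original image and a given CPL counterpart; in the actual model, the CPL image would come from the learned predictor, whereas here it is just an input array supplied for illustration.

```python
import numpy as np

def jnd_map(original: np.ndarray, cpl: np.ndarray) -> np.ndarray:
    """JND map as the absolute per-pixel difference between the
    original image and its critical perceptual lossless (CPL)
    counterpart."""
    return np.abs(original.astype(np.float32) - cpl.astype(np.float32))

# toy example: a flat image and a slightly brightened "CPL" version;
# every pixel can tolerate a change of 3 intensity levels
orig = np.full((4, 4), 100.0)
cpl = orig + 3.0
jnd = jnd_map(orig, cpl)
```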
As a prevailing task in the video surveillance and forensics field, person re-identification (re-ID) aims to match person images captured by non-overlapping cameras.
Then, we design a two-level perturbation fusion strategy to alleviate the conflict between the adversarial watermarks generated by different facial images and models.
In video-based dynamic point cloud compression (V-PCC), 3D point clouds are projected onto 2D images for compressing with the existing video codecs.
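The core projection idea can be illustrated with a toy orthographic depth map: drop one coordinate and record, per 2D pixel, the retained coordinate of the closest point. Real V-PCC patch generation, packing, and attribute images are far more involved; the axis choice, resolution, and nearest-point rule below are simplifying assumptions.

```python
import numpy as np

def project_depth(points, axis=2, res=8):
    """Orthographically project an N x 3 point cloud along `axis`
    onto a res x res depth image; -1 marks empty pixels, and for
    occupied pixels the largest coordinate value is kept."""
    axes = [a for a in range(3) if a != axis]
    uv = points[:, axes].astype(int)          # 2D pixel coordinates
    depth = np.full((res, res), -1.0)
    for (u, v), d in zip(uv, points[:, axis]):
        depth[u, v] = max(depth[u, v], d)     # keep the front-most point
    return depth

# two points share a pixel; the one with the larger depth wins
pts = np.array([[1, 2, 5.0], [1, 2, 3.0], [4, 4, 7.0]])
dmap = project_depth(pts)
```

Once geometry and attributes are in image form like this, an off-the-shelf video codec can compress the sequence of maps, which is exactly the appeal of the V-PCC design.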
Experimental results on image classification demonstrate that we successfully find the JND for deep machine vision.
We present a simple yet effective progressive self-guided loss function to facilitate deep learning-based salient object detection (SOD) in images.
In this paper, we study this problem from the viewpoint of adversarial attack and identify a totally new task, i.e., adversarial exposure attack, which generates adversarial images by tuning image exposure to mislead DNNs with significantly high transferability.
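The manipulated quantity here is exposure rather than per-pixel noise. The sketch below shows only the exposure adjustment itself, scaling linear intensities by 2**ev stops and clipping; an actual attack would search over ev (or a spatially varying exposure map) for a value that flips a classifier's prediction, and that search is omitted here as it depends on the target model.

```python
import numpy as np

def adjust_exposure(img, ev):
    """Apply a global exposure shift of `ev` stops to an image with
    intensities in [0, 1]: scale by 2**ev, then clip."""
    return np.clip(img * (2.0 ** ev), 0.0, 1.0)

img = np.full((2, 2), 0.25)
bright = adjust_exposure(img, 1.0)  # +1 stop doubles the intensity
```

Because exposure is a global, physically plausible change, such perturbations tend to transfer across models better than pixel-level noise, which is the motivation stated above.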
To this end, we initiate the very first attempt to study this problem from the perspective of adversarial attack and propose the adversarial denoise attack.
Existing CNN-based methods for pixel labeling heavily depend on multi-scale features to meet the requirements of both semantic comprehension and detail preservation.
Given an image, our quality measure first extracts 17 features through analysis of contrast, sharpness, brightness, and more, and then yields a measure of visual quality using a regression module, which is learned from a set of training samples much bigger than relevant image datasets.
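The features-then-regression pipeline can be sketched end to end with synthetic data: stand-in vectors play the role of the 17 handcrafted features (the real contrast/sharpness/brightness definitions are not reproduced here), and a plain least-squares fit plays the role of the learned regression module, which is an assumption about its form.

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-ins for the 17 per-image features and their quality scores
X = rng.normal(size=(200, 17))
w_true = rng.normal(size=17)
y = X @ w_true + rng.normal(scale=0.1, size=200)  # synthetic MOS-like targets

# regression module: least squares mapping features -> quality score
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ w_hat  # predicted quality for each image
```

With real data, X would be computed from images and y from subjective ratings; the regression step itself stays this simple unless a nonlinear module (e.g., a small network) is substituted.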
It is challenging to detect curved text due to its irregular shapes and varying sizes.
The proposed method, SFA, is compared with nine representative blur-specific NR-IQA methods, two general-purpose NR-IQA methods, and two extra full-reference IQA methods on Gaussian blur images (with and without Gaussian noise/JPEG compression) and realistic blur images from multiple databases, including LIVE, TID2008, TID2013, MLIVE1, MLIVE2, BID, and CLIVE.
The recent advances of hardware technology have made intelligent analysis equipped at the front-end with deep learning more prevalent and practical.
In this study, we investigate the robustness of pedestrian detection algorithms to video quality degradation.
MCN predicts instance-level bounding boxes by firstly converting an image into a Stochastic Flow Graph (SFG) and then performing Markov Clustering on this graph.
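Markov Clustering (MCL) itself is a short iteration of expansion (matrix power) and inflation (elementwise power plus renormalization) on a stochastic matrix. The sketch below runs MCL on a plain adjacency matrix; building the Stochastic Flow Graph from an image, which is MCN's actual contribution, is not reproduced, and the parameter values are the textbook defaults, not the paper's.

```python
import numpy as np

def markov_clustering(adj, expansion=2, inflation=2.0, iters=20):
    """Minimal MCL: alternate expansion (flow spreads along edges)
    and inflation (strong flow is reinforced, weak flow decays) on
    a column-stochastic matrix, then read clusters off the rows."""
    M = adj + np.eye(len(adj))                 # add self-loops
    M = M / M.sum(axis=0, keepdims=True)       # column-stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)
        M = M ** inflation
        M = M / M.sum(axis=0, keepdims=True)
    # each surviving row's support is one cluster
    return {frozenset(np.nonzero(row > 1e-6)[0].tolist())
            for row in M if row.max() > 1e-6}

# a graph with two obvious components: {0, 1} and {2, 3}
adj = np.array([[0, 1, 0, 0],
                [1, 0, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
clusters = markov_clustering(adj)
```

In MCN, the nodes would be image regions and edge weights the learned flow, so the recovered clusters correspond to object instances.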
With the development of acquisition technology, more comprehensive information, such as depth cues, inter-image correspondence, or temporal relationships, is available to extend image saliency detection to RGBD saliency detection, co-saliency detection, or video saliency detection.
In this paper, we propose an iterative RGBD co-saliency framework, which utilizes existing single-image saliency maps as the initialization and generates the final RGBD co-saliency map using a refinement-cycle model.
For many computer vision problems, the deep neural networks are trained and validated based on the assumption that the input images are pristine (i.e., artifact-free).