Semantic segmentation is a scene understanding task at the heart of safety-critical applications where robustness to corrupted inputs is essential.
To articulate the significance of the model perspective in novelty detection, we utilize backpropagated gradients.
To complement the information learned in activation-based representations, we propose utilizing a gradient-based representation that explicitly focuses on missing information.
We investigate the effect of challenging conditions through spectral analysis and show that challenging conditions can lead to distinct magnitude spectrum characteristics.
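As a minimal illustration of this kind of spectral analysis (a hypothetical sketch, not the paper's method), the snippet below simulates one challenging condition, a box blur, and shows that it attenuates the high-frequency energy of the magnitude spectrum; the image, filter size, and low-frequency cutoff are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.standard_normal((64, 64))        # stand-in image patch

# Simulate a challenging condition: 3x3 box blur via shifted copies.
blurred = np.zeros_like(img)
for dy in (-1, 0, 1):
    for dx in (-1, 0, 1):
        blurred += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
blurred /= 9.0

def highfreq_energy(x):
    """Sum of magnitude-spectrum energy outside a low-frequency core."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(x)))
    c = x.shape[0] // 2
    mask = np.ones_like(mag, dtype=bool)
    mask[c - 8:c + 8, c - 8:c + 8] = False  # exclude the low-frequency core
    return mag[mask].sum()

print(highfreq_energy(blurred) < highfreq_energy(img))  # blur suppresses high frequencies
```

The same comparison can be repeated for other conditions (noise, exposure changes) to see how each reshapes the magnitude spectrum.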
In this paper, we utilize weight gradients from backpropagation to characterize the representation space learned by deep learning algorithms.
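To make the idea of a gradient-based representation concrete, here is a minimal sketch under simplifying assumptions: a tiny linear autoencoder (weights here are random stand-ins for trained ones) whose reconstruction loss is backpropagated for a single input, with the resulting weight gradients flattened into a fixed-length descriptor. The architecture and dimensions are illustrative, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny linear autoencoder; weights are assumed trained (random here for illustration).
d, h = 16, 4
W1 = rng.standard_normal((h, d)) * 0.1   # encoder weights
W2 = rng.standard_normal((d, h)) * 0.1   # decoder weights

def gradient_representation(x):
    """Backpropagate the reconstruction loss for one input and
    return the flattened weight gradients as its representation."""
    z = W1 @ x                      # encode
    x_hat = W2 @ z                  # decode
    err = x_hat - x                 # d(0.5 * ||x_hat - x||^2) / d x_hat
    gW2 = np.outer(err, z)          # gradient w.r.t. decoder weights
    gW1 = np.outer(W2.T @ err, x)   # gradient w.r.t. encoder weights
    return np.concatenate([gW1.ravel(), gW2.ravel()])

g = gradient_representation(rng.standard_normal(d))
print(g.shape)  # one fixed-length gradient descriptor per input
```

Intuitively, inputs the model reconstructs well produce small gradients, while novel inputs demand larger weight updates, so the gradient descriptor reflects what the model is missing.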
Based on the conducted experiments, the proposed algorithm, RAPDNet, can achieve a sensitivity and a specificity of 90.6% over 64 test cases in a balanced set, which corresponds to an AUC of 0.929 in ROC analysis.
Scene understanding and semantic segmentation are at the core of many computer vision tasks, many of which involve interacting with humans in potentially dangerous ways.
In this paper, we introduce a portable eye imaging device denoted as lab-on-a-headset, which can automatically perform a swinging flashlight test.
Experimental results show that the benchmarked algorithms are highly sensitive to the tested challenging conditions, resulting in average performance drops of 0.17 in precision and 0.28 in recall under severe conditions.
Experimental results show that deep learning-based image representations can estimate the recognition performance variation with a Spearman's rank-order correlation of 0.94 under multifarious acquisition conditions.
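Spearman's rank-order correlation, the agreement metric used above, depends only on the orderings of the two variables. A minimal implementation (assuming no tied values; the score arrays are hypothetical examples, not data from the paper) is:

```python
import numpy as np

def spearman_rho(a, b):
    """Spearman's rank-order correlation: Pearson correlation of ranks.
    Assumes no tied values (ties would need average ranks)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical example: predicted quality scores vs. measured recognition accuracy.
quality = np.array([0.9, 0.7, 0.8, 0.3, 0.5])
accuracy = np.array([0.95, 0.6, 0.85, 0.2, 0.4])
print(spearman_rho(quality, accuracy))  # identical orderings give rho = 1.0
```

Because only ranks matter, a predictor can score 0.94 even when its raw outputs are on a completely different scale than the measured accuracies.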
In this paper, we generate and control semantically interpretable filters that are directly learned from natural images in an unsupervised fashion.
We use multiple linear decoders to capture different abstraction levels of the image patches.
In this work, we compare state-of-the-art quality-based and content-based spatial pooling strategies and show that, although features are key in any image quality assessment, pooling also matters.
To assess image quality, the filters need to capture perceptual differences based on dissimilarities between a reference image and its distorted version.
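To illustrate why pooling matters, the sketch below contrasts two simple strategies for collapsing a local quality map into one score: plain mean pooling versus a worst-region pooling that emphasizes the lowest local scores. Both the function and the 10% fraction are hypothetical illustrations, not the strategies compared in the paper.

```python
import numpy as np

def pool(quality_map, strategy="mean"):
    """Collapse a local quality map to a single score (hypothetical sketch)."""
    q = np.asarray(quality_map, dtype=float).ravel()
    if strategy == "mean":
        return float(q.mean())
    if strategy == "worst":
        # Emphasize the most degraded regions: mean of the lowest 10% of scores.
        k = max(1, q.size // 10)
        return float(np.sort(q)[:k].mean())
    raise ValueError(strategy)

# A mostly clean map with one badly distorted region.
qmap = [[1.0, 1.0], [1.0, 0.0]]
print(pool(qmap, "mean"), pool(qmap, "worst"))
```

With the same quality map, the two strategies disagree sharply, which is why the choice of pooling can change how well an estimator matches human judgments.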
Moreover, BleSS significantly enhances the performance of SR-SIM and FSIM in the full TID2013 database.
In terms of the Pearson and the Spearman correlation, ReSIFT is the best-performing quality estimator across the overall databases.
In this paper, we propose an algorithm denoted as HeartBEAT that tracks heart rate from wrist-type photoplethysmography (PPG) signals and simultaneously recorded three-axis acceleration data.
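A common first step in this kind of heart-rate tracking is frequency-domain peak picking over a physiologic band; the sketch below applies it to a synthetic PPG-like signal. This is a simplified illustration under assumed parameters (sampling rate, window length, simulated 90 BPM rate), not the HeartBEAT algorithm, which additionally uses the acceleration data to suppress motion artifacts.

```python
import numpy as np

fs = 25.0                        # assumed sampling rate (Hz) for a wrist sensor
t = np.arange(0, 8, 1 / fs)      # 8-second analysis window
hr_hz = 1.5                      # simulated heart rate: 1.5 Hz = 90 BPM
ppg = np.sin(2 * np.pi * hr_hz * t) \
    + 0.3 * np.random.default_rng(2).standard_normal(t.size)  # noisy PPG stand-in

spec = np.abs(np.fft.rfft(ppg))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Restrict to a plausible heart-rate band (40-220 BPM) before peak picking.
band = (freqs >= 40 / 60) & (freqs <= 220 / 60)
peak_hz = freqs[band][np.argmax(spec[band])]
print(round(peak_hz * 60))       # estimated heart rate in BPM
```

In real wrist recordings, motion artifacts create competing spectral peaks in the same band, which is exactly where the simultaneously recorded acceleration data becomes useful.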
Moreover, we investigate the relationship between object recognition and image quality and show that objective quality algorithms can estimate recognition performance under certain photometric challenging conditions.
Robust and reliable traffic sign detection is necessary to bring autonomous vehicles onto our roads.
This paper presents a full-reference image quality estimator based on color, structure, and visual system characteristics denoted as CSV.
To overcome these shortcomings, we introduce an image quality assessment algorithm based on the Spectral Understanding of Multi-scale and Multi-channel Error Representations, denoted as SUMMER.
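As a rough sketch of the general idea, the snippet below compares reference and distorted images through their magnitude spectra at several dyadic scales and averages the differences into one error score. The function, downsampling scheme, and aggregation are hypothetical simplifications for illustration, not the SUMMER algorithm.

```python
import numpy as np

def spectral_error(ref, dist, scales=3):
    """Hypothetical sketch: average magnitude-spectrum differences over scales."""
    errs = []
    for _ in range(scales):
        errs.append(np.abs(np.abs(np.fft.fft2(ref))
                           - np.abs(np.fft.fft2(dist))).mean())
        ref, dist = ref[::2, ::2], dist[::2, ::2]   # crude dyadic downsampling
    return float(np.mean(errs))

rng = np.random.default_rng(3)
ref = rng.standard_normal((64, 64))                 # stand-in reference image
dist = ref + 0.2 * rng.standard_normal((64, 64))    # simulated distortion
print(spectral_error(ref, dist))
```

An identical pair yields an error of exactly zero, and stronger distortions raise the score, which is the monotonic behavior a full-reference estimator needs before it is calibrated against subjective ratings.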
We benchmark the performance of existing solutions in real-world scenarios and analyze the performance variation with respect to challenging conditions.