In seismic interpretation, obtaining pixel-level labels for various rock structures is time-consuming and expensive.
Neural networks trained to classify images do so by identifying features that allow them to distinguish between classes.
To capture the model's perspective in novelty detection, we utilize backpropagated gradients.
In this paper, we show that existing deep architectures for recognition and localization, which have never been exposed to eye-tracking data or any saliency dataset, are capable of predicting human visual saliency.
To complement the information learned in activation-based representations, we propose a gradient-based representation that explicitly focuses on missing information.
In this paper, we utilize weight gradients from backpropagation to characterize the representation space learned by deep learning algorithms.
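The idea of characterizing inputs by the weight gradients they induce can be illustrated with a minimal sketch. The setup below (a tiny linear autoencoder, the shapes, and the reconstruction loss) is assumed for illustration and is not the paper's exact architecture: a sample is passed through the model, the loss is backpropagated, and the resulting decoder-weight gradient is flattened into a representation vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-layer linear autoencoder: x_hat = W_dec @ (W_enc @ x).
d, k = 16, 4
W_enc = rng.standard_normal((k, d)) * 0.1
W_dec = rng.standard_normal((d, k)) * 0.1

def weight_gradients(x):
    """Backpropagate the reconstruction loss L = ||x_hat - x||^2 and
    return dL/dW_dec, flattened into a representation vector."""
    z = W_enc @ x                        # latent code
    x_hat = W_dec @ z                    # reconstruction
    err = x_hat - x                      # dL/dx_hat (up to a factor of 2)
    grad_W_dec = 2.0 * np.outer(err, z)  # dL/dW_dec by the chain rule
    return grad_W_dec.ravel()

x_in = rng.standard_normal(d)   # sample resembling the training data
x_out = x_in + 5.0              # shifted, "novel" sample

g_in = weight_gradients(x_in)
g_out = weight_gradients(x_out)

# Novel inputs tend to induce larger gradient magnitudes, which is the
# intuition behind gradient-based characterizations of the learned space.
print(np.linalg.norm(g_in), np.linalg.norm(g_out))
```

The gradient vector measures how much the model would have to change to accommodate the input, so it complements the activations, which only measure how the model currently responds.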
In this paper, we generate and control semantically interpretable filters that are directly learned from natural images in an unsupervised fashion.
To assess image quality, the filters need to capture perceptual differences based on dissimilarities between a reference image and its distorted version.
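One way to realize this is to compare the filter responses of the reference and distorted patches and use their dissimilarity as a quality score. The sketch below uses a random filter bank purely as a stand-in; in the described approach the filters would be learned from natural images in an unsupervised fashion.

```python
import numpy as np

rng = np.random.default_rng(1)

def filter_responses(patch, filters):
    """Inner product of a flattened patch with each filter in the bank
    (a crude stand-in for a learned convolutional filter bank)."""
    return filters @ patch.ravel()

# A bank of 8 random 8x8 "filters" (illustrative placeholder only).
filters = rng.standard_normal((8, 64))

reference = rng.standard_normal((8, 8))
distorted = reference + 0.5 * rng.standard_normal((8, 8))  # additive noise

r_ref = filter_responses(reference, filters)
r_dis = filter_responses(distorted, filters)

# Dissimilarity between filter responses as a simple quality score:
# identical patches score 0, and the score grows with the distortion.
score = np.linalg.norm(r_ref - r_dis)
print(score)
```

Under this sketch, a larger score corresponds to a more visible distortion; a perceptually meaningful filter bank would weight distortions by how noticeable they are to a human observer.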
We use multiple linear decoders to capture different abstraction levels of the image patches.
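The multiple-decoder idea can be sketched with linear decoders of increasing latent size fitted to the same set of patches: a small latent keeps only coarse structure, while a larger latent recovers finer detail, so the decoders span several abstraction levels. The PCA-based construction and patch sizes below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)

# 200 fake 6x6 image patches, flattened to 36-dim vectors, mean-centered.
patches = rng.standard_normal((200, 36))
patches -= patches.mean(axis=0)

# PCA yields the optimal linear encoder/decoder pair for each latent size.
_, _, Vt = np.linalg.svd(patches, full_matrices=False)

def reconstruction_error(k):
    """Mean squared error of the rank-k linear decoder."""
    codes = patches @ Vt[:k].T   # linear encoding into k dimensions
    recon = codes @ Vt[:k]       # linear decoding back to patch space
    return np.mean((patches - recon) ** 2)

# Coarse-to-fine abstraction levels: error shrinks as the latent grows.
errors = [reconstruction_error(k) for k in (4, 16, 36)]
print(errors)
```

Each decoder's residual isolates the detail that its abstraction level discards, which is what makes the set of decoders jointly informative about the patch content.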
We benchmark the performance of existing solutions in real-world scenarios and analyze how their performance varies under challenging conditions.