84 papers with code • 2 benchmarks • 7 datasets
A saliency map is a topographic representation of where human eyes are likely to fixate in a visual scene. Saliency prediction models, informed by the human visual attention mechanism, estimate the probability that the viewer's gaze will rest at a given position in the scene.
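As a concrete (if deliberately simple) illustration of what a saliency map is, the classic parameter-free spectral residual method of Hou & Zhang (2007) can be sketched in a few lines of numpy. This is a hand-crafted baseline, not one of the learned models discussed on this page:

```python
import numpy as np

def spectral_residual_saliency(img):
    """Spectral-residual saliency (Hou & Zhang, 2007) on a 2-D grayscale array."""
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # 3x3 box-filter of the log-amplitude spectrum (edge padding)
    padded = np.pad(log_amp, 1, mode="edge")
    avg = sum(padded[i:i + img.shape[0], j:j + img.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    # the "residual" spectrum keeps what deviates from the smooth background
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# toy usage: a bright block on a flat background produces a non-trivial map
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
sal = spectral_residual_saliency(img)
```

The output is a per-pixel map normalized to [0, 1]; learned saliency predictors produce the same kind of map, but from trained features rather than spectral statistics.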
To develop robust representations for this challenging task, high-level visual features at multiple spatial scales must be extracted and augmented with contextual information.
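The multi-scale idea above can be sketched without any learned network: extract the signal at several spatial scales and upsample each scale back to full resolution, so every pixel carries both fine detail and coarse context. The scales and the average-pooling choice here are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def box_downsample(img, k):
    """Average-pool a 2-D array by factor k (assumes sides divisible by k)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def multi_scale_stack(img, scales=(1, 2, 4)):
    """Per-pixel multi-scale feature stack: each scale is pooled then
    nearest-neighbour upsampled back to full resolution."""
    feats = []
    for k in scales:
        coarse = box_downsample(img, k)
        feats.append(np.repeat(np.repeat(coarse, k, axis=0), k, axis=1))
    return np.stack(feats)  # shape: (len(scales), H, W)

img = np.arange(64.0).reshape(8, 8)
stack = multi_scale_stack(img)
```

A deep model does the analogue of this with learned convolutional features at each scale, but the augmentation of local evidence with surrounding context works the same way.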
In this paper, we introduce and tackle the simultaneous enhancement and super-resolution (SESR) problem for underwater robot vision and provide an efficient solution for near real-time applications.
Our framework includes two main models: 1) a generator model, which maps the input image and a latent variable to a stochastic saliency prediction, and 2) an inference model, which gradually updates the latent variable by sampling it from the true or approximate posterior distribution.
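The generator half of such a framework can be illustrated with a toy numpy sketch: drawing several latent samples yields a distribution of saliency maps, whose pixel-wise variance reflects predictive uncertainty. The generator below is a hypothetical stand-in (a linear readout), not the paper's network, and the inference model's posterior update is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(image, z, w):
    """Toy stochastic generator: image intensity modulated by a latent vector z."""
    logits = image * w[0] + z.mean() * w[1]
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> values in (0, 1)

image = rng.random((32, 32))
w = np.array([2.0, 1.0])  # hypothetical readout weights

# multiple latent draws -> a distribution over saliency maps
samples = np.stack([generator(image, rng.standard_normal(8), w)
                    for _ in range(16)])
mean_map, var_map = samples.mean(axis=0), samples.var(axis=0)
```

The mean map plays the role of a point prediction, while the variance map is where the stochastic formulation pays off over a deterministic predictor.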
We introduce SalGAN, a deep convolutional neural network for visual saliency prediction trained with adversarial examples.
In this paper, we present a conditional generative adversarial network-based model for real-time underwater image enhancement.
We also present a benchmark evaluation of state-of-the-art semantic segmentation approaches based on standard performance metrics.
Current state-of-the-art models for saliency prediction employ fully convolutional networks (FCNs) that perform a non-linear combination of features extracted from the last convolutional layer to predict saliency maps.
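The readout step these FCNs share, combining last-layer feature channels into a single-channel map, reduces to a 1x1 convolution, i.e. a per-pixel linear combination of channels followed by a non-linearity. A minimal numpy sketch, with assumed (not paper-specific) shapes and random weights in place of trained ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed shapes: C feature channels over an H x W spatial grid
C, H, W = 512, 30, 40
features = rng.random((C, H, W)).astype(np.float32)

# A 1x1 convolution is a per-pixel linear combination of channels
w = (rng.standard_normal(C) * 0.01).astype(np.float32)
b = 0.0
logits = np.tensordot(w, features, axes=([0], [0])) + b  # -> (H, W)
saliency = 1.0 / (1.0 + np.exp(-logits))                 # sigmoid readout
```

In a trained FCN the combination is learned jointly with the feature extractor, and the low-resolution map is typically upsampled back to image size before evaluation.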