The latent spaces of typical GAN models often have semantically meaningful directions.
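A common way to exploit such a direction is simple vector arithmetic in latent space: shift a latent code along the direction to change one attribute of the generated image. The sketch below is purely illustrative; the names (`z`, the direction vector, `alpha`) are assumptions, not from any particular paper.

```python
# Illustrative sketch: editing a GAN latent code along a semantic direction.
# All names here (z, d, alpha) are hypothetical placeholders.

def move_along_direction(z, direction, alpha):
    """Shift latent code z by alpha steps along a direction vector."""
    return [zi + alpha * di for zi, di in zip(z, direction)]

z = [0.5, -1.2, 0.3]   # a toy 3-D latent code
d = [1.0, 0.0, 0.0]    # a hypothetical "semantic" unit direction
z_edit = move_along_direction(z, d, alpha=2.0)
# z_edit == [2.5, -1.2, 0.3]
```

Feeding `z_edit` (instead of `z`) to the generator would then produce an image with the corresponding attribute strengthened or weakened, depending on the sign of `alpha`.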
In this paper, we present a deep-learning-based method in which a novel memory-oriented decoder is tailored for light field saliency detection.
In this paper, we develop a multi-task, motion-guided video salient object detection network that learns two sub-tasks with two sub-networks: one for salient object detection in still images, and the other for motion saliency detection in optical flow images.
It consists of two building blocks: an encoder network that extracts low-resolution spatiotemporal features from an input clip of several consecutive frames, and a prediction network that decodes the encoded features spatially while aggregating the temporal information.
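The encoder/decoder split described above can be caricatured in a few lines: the "encoder" pools each frame to a low-resolution map, temporal aggregation averages across frames, and the "decoder" upsamples the aggregate back to input resolution. This is a toy sketch of the data flow only, not the paper's actual network; all function names are made up.

```python
# Toy sketch of the encode -> aggregate -> decode pipeline (not the real model).

def pool2x(frame):
    """2x2 average pooling over a 2-D list of floats (the toy 'encoder')."""
    h, w = len(frame), len(frame[0])
    return [[(frame[i][j] + frame[i][j + 1] + frame[i + 1][j] + frame[i + 1][j + 1]) / 4
             for j in range(0, w, 2)] for i in range(0, h, 2)]

def aggregate_time(feature_maps):
    """Average the pooled features over all frames in the clip."""
    t = len(feature_maps)
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    return [[sum(fm[i][j] for fm in feature_maps) / t for j in range(w)]
            for i in range(h)]

def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling (the toy 'decoder')."""
    out = []
    for row in fmap:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

clip = [[[1.0, 1.0], [1.0, 1.0]],   # frame 1 (2x2)
        [[3.0, 3.0], [3.0, 3.0]]]   # frame 2 (2x2)
pooled = [pool2x(f) for f in clip]  # each frame -> 1x1 feature map
agg = aggregate_time(pooled)        # [[2.0]]
pred = upsample2x(agg)              # [[2.0, 2.0], [2.0, 2.0]]
```

A real network would of course use learned convolutions in place of the pooling and upsampling, but the shape bookkeeping (per-frame spatial reduction, then temporal fusion, then spatial decoding) is the same.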
Light field imaging presents an attractive alternative to RGB imaging because it additionally records the direction of the incoming light.
Similar to IQA models, the structural dissimilarity is computed based on the correlation of the structural features.
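One common way to turn feature correlation into a dissimilarity score, as the sentence above describes, is to map a Pearson correlation in [-1, 1] onto [0, 1]. The paper's actual feature extraction is not reproduced here; `f_ref` and `f_dist` are assumed to be 1-D structural feature vectors, and the linear mapping is an illustrative choice.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def structural_dissimilarity(f_ref, f_dist):
    """Map correlation in [-1, 1] to a dissimilarity in [0, 1]."""
    return (1.0 - pearson(f_ref, f_dist)) / 2.0

f_ref = [1.0, 2.0, 3.0, 4.0]
structural_dissimilarity(f_ref, f_ref)        # ~0.0 (identical features)
structural_dissimilarity(f_ref, f_ref[::-1])  # ~1.0 (anti-correlated features)
```

Perfectly correlated feature vectors give a dissimilarity near 0, and perfectly anti-correlated ones give a value near 1, which matches the intuition that structural distortion grows as the correlation of structural features drops.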