Despite noticeable progress on perceptual tasks such as detection, instance segmentation, and human parsing, computers still perform poorly at visually understanding humans in crowded scenes, a capability that underpins applications such as group behavior analysis, person re-identification, and autonomous driving.
SOTA for Multi-Human Parsing on MHP v2.0
We introduce SalGAN, a deep convolutional neural network for visual saliency prediction trained with adversarial examples.
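Adversarial training of this kind pairs a pixel-wise content loss with an adversarial term that rewards fooling a discriminator. A minimal numpy sketch of such a hybrid generator objective is below; the `alpha` weighting and the exact loss composition are illustrative assumptions, not the paper's precise formulation:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    # pixel-wise binary cross-entropy between predicted and ground-truth saliency maps
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def generator_loss(pred_map, gt_map, disc_prob_fake, alpha=0.005):
    # hybrid objective: content (BCE) term plus an adversarial term that
    # rewards fooling the discriminator (disc_prob_fake -> 1).
    # alpha is an illustrative weighting, not the paper's exact value.
    adv = -np.log(np.clip(disc_prob_fake, 1e-7, 1.0))
    return alpha * bce(pred_map, gt_map) + adv
```

When the predicted map matches the ground truth and the discriminator is fully fooled, both terms vanish, so the loss approaches zero.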
As an important problem in computer vision, salient object detection (SOD) from images has been attracting an increasing amount of research effort over the years.
The prediction of salient areas in images has been traditionally addressed with hand-crafted features based on neuroscience principles.
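A classic instance of such hand-crafted, neuroscience-inspired features is a center-surround difference, often approximated by subtracting a coarse Gaussian blur from a fine one (difference of Gaussians). The sketch below is a rough stand-in for Itti-style center-surround conspicuity, with illustrative scale parameters:

```python
import numpy as np

def gaussian_kernel(sigma):
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # separable 2-D Gaussian blur with edge padding
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    padded = np.pad(img, r, mode='edge')
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode='valid'), 1, padded)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode='valid'), 0, tmp)

def center_surround(img, sigma_c=2, sigma_s=8):
    # fine-scale minus coarse-scale response (difference of Gaussians),
    # normalised to [0, 1]; sigmas are illustrative choices
    d = np.abs(blur(img, sigma_c) - blur(img, sigma_s))
    return (d - d.min()) / (d.max() - d.min() + 1e-9)
```

An isolated bright point on a dark background produces its strongest center-surround response at the point itself, which is the behavior such features are designed to capture.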
Data-driven saliency has recently gained a lot of attention thanks to the use of Convolutional Neural Networks for predicting gaze fixations.
Current state-of-the-art models for saliency prediction employ fully convolutional networks (FCNs) that perform a non-linear combination of features extracted from the last convolutional layer to predict saliency maps.
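The non-linear combination described above can be sketched as a 1x1 convolution over the channels of the last feature map followed by a sigmoid. This is a simplified illustration of the general pattern, not any specific model's head:

```python
import numpy as np

def predict_saliency(features, weights, bias):
    # features: (C, H, W) activations from the last convolutional layer
    # weights: (C,) channel weights of a 1x1 convolution; bias: scalar
    # non-linear combination: weighted channel sum squashed by a sigmoid
    logits = np.tensordot(weights, features, axes=([0], [0])) + bias
    return 1.0 / (1.0 + np.exp(-logits))
```

The output has the same spatial resolution as the input feature map, with each pixel's value in (0, 1) interpreted as predicted saliency.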
We present a new computational model for gaze prediction in egocentric videos by exploring patterns in temporal shift of gaze fixations (attention transition) that are dependent on egocentric manipulation tasks.
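The idea of exploiting patterns in how gaze shifts between fixation targets can be illustrated, in heavily simplified form, by a first-order Markov model over fixated regions fitted from task recordings. This is an illustrative stand-in only; the actual model in the paper is a learned deep network, not a region-level transition matrix:

```python
import numpy as np

def fit_transition_matrix(fixation_seqs, n_regions):
    # count region-to-region gaze shifts across recordings,
    # then row-normalise into transition probabilities;
    # rows with no observations fall back to a uniform distribution
    counts = np.zeros((n_regions, n_regions))
    for seq in fixation_seqs:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows,
                     out=np.full_like(counts, 1.0 / n_regions),
                     where=rows > 0)

def predict_next(region, T):
    # most likely next fixated region under the learned transition model
    return int(np.argmax(T[region]))
```

Given sequences of fixated region indices, the model predicts the most frequent follow-up fixation for the current region, capturing task-dependent attention transitions in their crudest form.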