Recent scene graph generation (SGG) models have demonstrated their ability to capture the most frequent relations among visual entities.
Our approach outperforms the state-of-the-art on all datasets.
Sequential vision-to-language, or visual storytelling, has recently become an area of focus in the computer vision and language modeling communities.
Our experiments show that the effect of the visual features in our system is small.
To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification.
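For readers unfamiliar with the term, the sketch below computes a basic 3x3 Local Binary Pattern (LBP) encoding of a grayscale image, the kind of binary-pattern map such networks take as input; it is a generic illustration only, not the paper's exact encoding or fusion architecture.

```python
import numpy as np

def lbp_encode(gray):
    """gray: 2-D float array; returns an 8-bit LBP code image for the interior pixels."""
    center = gray[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(center.shape, dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = gray[1 + dy:gray.shape[0] - 1 + dy, 1 + dx:gray.shape[1] - 1 + dx]
        code += (neighbor >= center).astype(np.int32) << bit  # set bit when neighbor >= center
    return code.astype(np.uint8)
```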
This paper revisits visual saliency prediction by evaluating recent advancements in the field, such as crowd-sourced mouse-tracking-based databases and contextual annotations.
The motivation behind such a problem formulation is (1) the benefits to knowledge-representation-based vision pipelines, and (2) the potential improvements in emulating bio-inspired vision systems by solving these three problems together.
To bridge the gap between humans and machines in image understanding and describing, we need further insight into how people describe a perceived scene.
To investigate the current status of affective image tagging, we (1) introduce a new eye movement dataset collected with an affordable eye tracker, (2) study the use of deep neural networks for pleasantness recognition, and (3) investigate the gap between deep features and eye movements.
Most approaches to human attribute and action recognition in still images are based on image representations in which multi-scale local features are pooled across scale into a single, scale-invariant encoding.
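As a rough illustration of such an encoding (not any specific method from the paper), the sketch below pools local features extracted at several scales into one descriptor that discards scale identity; extract_local_features is an assumed, hypothetical helper.

```python
import numpy as np

def scale_invariant_encoding(image, extract_local_features, scales=(0.5, 1.0, 2.0)):
    """extract_local_features(image, scale) -> (n_patches, d) array (hypothetical).
    Returns a single d-dimensional, scale-invariant encoding."""
    feats = np.concatenate([extract_local_features(image, s) for s in scales], axis=0)
    return feats.max(axis=0)  # max-pool over locations and scales -> one encoding
```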
This paper presents a novel fixation prediction and saliency modeling framework based on inter-image similarities and an ensemble of Extreme Learning Machines (ELMs).
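For context, an ELM keeps a randomly projected hidden layer fixed and solves only its output weights in closed form; the sketch below shows a generic single-hidden-layer ELM and a simple averaging ensemble, not the paper's saliency-specific model or its inter-image similarity weighting.

```python
import numpy as np

def train_elm(X, T, n_hidden=500, seed=0):
    """Fit one ELM: X is (n_samples, n_features), T is (n_samples, n_outputs)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights, never trained
    b = rng.standard_normal(n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta = np.linalg.pinv(H) @ T                     # output weights via least squares
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def predict_ensemble(X, models):
    """Average predictions of several independently seeded ELMs."""
    return np.mean([predict_elm(X, *m) for m in models], axis=0)
```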
We present our submission to the Microsoft Video to Language Challenge of generating short captions describing videos in the challenge dataset.
In this paper, we describe the system we used in the Large Scale Movie Description Challenge at ICCV 2015 to generate textual descriptions of short video clips with recurrent neural networks (RNNs).
It then retrieves images with a specialized online learning algorithm that balances the tradeoff between exploring new images and exploiting the already inferred interests of the user.
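The paper's learner is specialized, but a generic epsilon-greedy rule conveys the explore/exploit idea; in the sketch below, score_by_user_model is a hypothetical scoring function for the inferred user interests, not part of the described system.

```python
import random

def retrieve_next(candidates, score_by_user_model, epsilon=0.1):
    """Pick the next image to show: usually exploit the inferred user interests,
    occasionally explore a random, not-yet-seen image."""
    if random.random() < epsilon:
        return random.choice(candidates)             # explore new images
    return max(candidates, key=score_by_user_model)  # exploit the current user model
```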
In this paper we present S-pot, a benchmark setting for evaluating the performance of automatic spotting of signs in continuous sign language videos.
We present a software toolkit called SLMotion which provides a framework for automatic and semiautomatic analysis, feature extraction and annotation of individual sign language videos, and which can easily be adapted to batch processing of entire sign language corpora.
We consider a non-intrusive computer-vision method for measuring the motion of a person performing natural signing in video recordings.