Recurrent Models for Situation Recognition

ICCV 2017 · Arun Mallya, Svetlana Lazebnik

This work proposes Recurrent Neural Network (RNN) models to predict structured 'image situations' -- actions and noun entities fulfilling semantic roles related to the action. In contrast to prior work relying on Conditional Random Fields (CRFs), we use a specialized action prediction network followed by an RNN for noun prediction. Our system obtains state-of-the-art accuracy on the challenging recent imSitu dataset, beating CRF-based models, including ones trained with additional data. Further, we show that specialized features learned from situation prediction can be transferred to the task of image captioning to more accurately describe human-object interactions.
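The abstract describes a two-stage architecture: an action (verb) prediction network whose output conditions an RNN that fills each semantic role with a noun. The PyTorch sketch below illustrates that flow only; the stand-in encoder, layer sizes, fusion by addition, the LSTM choice, and the noun vocabulary size are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SituationRNN(nn.Module):
    """Minimal sketch of the verb-then-nouns pipeline described in the abstract.

    A stand-in image encoder feeds an action (verb) classifier; conditioned on
    the image features fused with the predicted verb, an LSTM emits one noun
    per semantic role. Sizes and components are illustrative assumptions, not
    the paper's exact configuration (which builds on a pretrained CNN).
    """

    def __init__(self, num_verbs=504, num_nouns=2000, feat_dim=512,
                 hidden_dim=512, max_roles=6):
        super().__init__()
        self.max_roles = max_roles
        # Stand-in encoder; the paper uses a pretrained CNN for image features.
        self.encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.verb_head = nn.Linear(feat_dim, num_verbs)       # action prediction network
        self.verb_embed = nn.Embedding(num_verbs, feat_dim)
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)  # noun prediction RNN
        self.noun_head = nn.Linear(hidden_dim, num_nouns)

    def forward(self, images):
        feats = self.encoder(images)                          # (B, feat_dim)
        verb_logits = self.verb_head(feats)                   # (B, num_verbs)
        verb = verb_logits.argmax(dim=1)                      # predicted action
        # Fuse image features with the predicted verb and feed the same input
        # at every RNN step, one step per semantic-role slot.
        step = (feats + self.verb_embed(verb)).unsqueeze(1)   # (B, 1, feat_dim)
        hidden, _ = self.rnn(step.repeat(1, self.max_roles, 1))
        noun_logits = self.noun_head(hidden)                  # (B, max_roles, num_nouns)
        return verb_logits, noun_logits

# Usage: verb and per-role noun logits for a batch of 224x224 RGB images.
model = SituationRNN()
verb_logits, noun_logits = model(torch.randn(2, 3, 224, 224))
print(verb_logits.shape, noun_logits.shape)  # (2, 504) and (2, 6, 2000)
```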

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Situation Recognition | imSitu | RNN + Fusion | Top-1 Verb | 35.9 | #10 |
| Situation Recognition | imSitu | RNN + Fusion | Top-1 Verb & Value | 27.45 | #10 |
| Situation Recognition | imSitu | RNN + Fusion | Top-5 Verbs | 63.08 | #9 |
| Situation Recognition | imSitu | RNN + Fusion | Top-5 Verbs & Value | 46.88 | #9 |
| Grounded Situation Recognition | SWiG | RNN + Fusion | Top-1 Verb | 35.9 | #10 |
| Grounded Situation Recognition | SWiG | RNN + Fusion | Top-1 Verb & Value | 27.45 | #10 |
| Grounded Situation Recognition | SWiG | RNN + Fusion | Top-5 Verbs | 63.08 | #9 |
| Grounded Situation Recognition | SWiG | RNN + Fusion | Top-5 Verbs & Value | 46.88 | #9 |
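The "Verb" and "Verb & Value" metrics follow the imSitu evaluation protocol. Below is a minimal Python sketch of one common reading of the top-1 variants, assuming "Verb & Value" grants per-role credit only when the predicted verb is correct and the predicted noun for a role appears among the annotators' nouns; the official benchmark's exact averaging may differ.

```python
def imsitu_top1_scores(pred_verb, pred_nouns, gt_verb, gt_noun_sets):
    """Hedged sketch of the top-1 'Verb' and 'Verb & Value' scores.

    pred_verb: predicted verb id
    pred_nouns: dict mapping role name -> predicted noun id
    gt_verb: annotated verb id
    gt_noun_sets: dict mapping role name -> set of noun ids accepted by annotators

    Assumption: a role counts as correct only if the verb matches and the
    predicted noun is among the annotators' nouns for that role; role scores
    are averaged within the image.
    """
    verb_correct = float(pred_verb == gt_verb)
    if not verb_correct:
        return verb_correct, 0.0
    role_hits = [pred_nouns.get(r) in nouns for r, nouns in gt_noun_sets.items()]
    value = sum(role_hits) / max(len(role_hits), 1)
    return verb_correct, value

# Example: verb correct, 2 of 3 roles matched -> (1.0, 0.666...)
print(imsitu_top1_scores(
    pred_verb=7,
    pred_nouns={"agent": 12, "tool": 40, "place": 3},
    gt_verb=7,
    gt_noun_sets={"agent": {12}, "tool": {41, 42}, "place": {3}},
))
```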

Methods


Recurrent Neural Network (RNN)