Neural Self Talk: Image Understanding via Continuous Questioning and Answering

10 Dec 2015 · Yezhou Yang, Yi Li, Cornelia Fermuller, Yiannis Aloimonos

In this paper we consider the problem of continuously discovering image contents by actively asking image-based questions and subsequently answering them. The key components are a Visual Question Generation (VQG) module and a Visual Question Answering (VQA) module, both of which use Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Given a dataset that contains images, questions, and their answers, the two modules are trained at the same time, the difference being that VQG takes the images as input and the corresponding questions as output, while VQA takes images and questions as input and the corresponding answers as output. We evaluate the self-talk process subjectively using Amazon Mechanical Turk, and the results show the effectiveness of the proposed method.
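To make the described pipeline concrete, below is a minimal PyTorch sketch of the self-talk loop: a VQG module that decodes a question from CNN image features, and a VQA module that classifies an answer given the features and the question. All module names, dimensions, token ids, the elementwise fusion, and the sample/greedy decoding choices are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of "neural self talk": VQG asks a question about an
# image, VQA answers it. Sizes and fusion are assumptions for illustration.
import torch
import torch.nn as nn

VOCAB, ANSWERS, EMB, HID, FEAT = 1000, 100, 256, 512, 2048
BOS = 1  # assumed begin-of-sentence token id

class VQG(nn.Module):
    """Generates a question from CNN image features (captioning-style RNN decoder)."""
    def __init__(self):
        super().__init__()
        self.img = nn.Linear(FEAT, HID)      # project image features to the initial RNN state
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTMCell(EMB, HID)
        self.out = nn.Linear(HID, VOCAB)

    def generate(self, feat, max_len=20, sample=False):
        h = torch.tanh(self.img(feat))
        c = torch.zeros_like(h)
        tok = torch.full((feat.size(0),), BOS, dtype=torch.long)
        words = []
        for _ in range(max_len):
            h, c = self.rnn(self.emb(tok), (h, c))
            logits = self.out(h)
            # "Sample" vs. "Max" decoding, mirroring the two models in the results table
            tok = (torch.multinomial(logits.softmax(-1), 1).squeeze(1)
                   if sample else logits.argmax(-1))
            words.append(tok)
        return torch.stack(words, 1)

class VQA(nn.Module):
    """Answers a question given CNN image features and the question tokens."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
        self.img = nn.Linear(FEAT, HID)
        self.cls = nn.Linear(HID, ANSWERS)   # answering framed as classification

    def forward(self, feat, question):
        _, (h, _) = self.rnn(self.emb(question))
        fused = h[-1] * torch.tanh(self.img(feat))  # assumed elementwise fusion
        return self.cls(fused)

# Self-talk: ask a question about the image, then answer it.
feat = torch.randn(1, FEAT)                  # stand-in for CNN features of one image
vqg, vqa = VQG(), VQA()
question = vqg.generate(feat, sample=True)   # "Sample" decoding variant
answer = vqa(feat, question).argmax(-1)
print(question, answer)
```

In this reading, both modules share the same supervision source (image-question-answer triples) and could be trained jointly with standard cross-entropy losses on the question tokens (VQG) and the answer label (VQA).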

Task | Dataset | Model | Metric | Value | Global Rank
Question Generation | COCO Visual Question Answering (VQA) real images 1.0 open ended | Sample (Yang, 2015) | BLEU-1 | 38.8 | #4
Question Generation | COCO Visual Question Answering (VQA) real images 1.0 open ended | Max (Yang, 2015) | BLEU-1 | 59.4 | #3
