Multimodal Differential Network for Visual Question Generation

Generating natural questions from an image is a semantic task that requires using both the visual and language modalities to learn multimodal representations. Images can have multiple visual and language contexts that are relevant for generating questions, namely places, captions, and tags. In this paper, we propose the use of exemplars for obtaining the relevant context, which we realize with a Multimodal Differential Network that produces natural and engaging questions. The generated questions show a remarkable similarity to natural questions, as validated by a human study. Further, we observe that the proposed approach substantially improves over state-of-the-art benchmarks on the quantitative metrics (BLEU, METEOR, ROUGE, and CIDEr).
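Since the page lists no code, below is a minimal PyTorch sketch of the idea the abstract describes: fuse an image feature with a text-context (e.g., caption) embedding for the target image and for a supporting and a contrasting exemplar, train the fused embedding with a differential (triplet) objective, and condition an LSTM decoder on it to generate the question. All names (`JointFusion`, `MDNSketch`), dimensions, and design choices (Hadamard-style fusion, `TripletMarginLoss`, teacher forcing) are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class JointFusion(nn.Module):
    """Fuse a CNN image feature with a text-context embedding (assumed design)."""

    def __init__(self, img_dim, txt_dim, hid_dim):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hid_dim)
        self.txt_proj = nn.Linear(txt_dim, hid_dim)

    def forward(self, img_feat, txt_feat):
        # Hadamard-style fusion of the two projected modalities.
        return torch.tanh(self.img_proj(img_feat)) * torch.tanh(self.txt_proj(txt_feat))


class MDNSketch(nn.Module):
    """Triplet (target, supporting, contrasting) embeddings feed a question decoder."""

    def __init__(self, img_dim=2048, txt_dim=300, hid_dim=512, vocab_size=10000):
        super().__init__()
        self.fusion = JointFusion(img_dim, txt_dim, hid_dim)
        self.embed = nn.Embedding(vocab_size, hid_dim)
        self.decoder = nn.LSTM(hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)
        self.triplet = nn.TripletMarginLoss(margin=1.0)

    def forward(self, tgt, sup, con, question):
        # tgt / sup / con are (img_feat, txt_feat) pairs for the target image
        # and its supporting / contrasting exemplars.
        z_t = self.fusion(*tgt)
        z_s = self.fusion(*sup)
        z_c = self.fusion(*con)
        # Differential objective: pull the supporting exemplar close,
        # push the contrasting one away.
        emb_loss = self.triplet(z_t, z_s, z_c)
        # Decode the question, conditioning the LSTM on the fused embedding.
        h0 = z_t.unsqueeze(0)               # (1, batch, hid)
        c0 = torch.zeros_like(h0)
        tok = self.embed(question[:, :-1])  # teacher forcing on shifted tokens
        dec, _ = self.decoder(tok, (h0, c0))
        logits = self.out(dec)              # (batch, len-1, vocab)
        ce = nn.functional.cross_entropy(
            logits.reshape(-1, logits.size(-1)), question[:, 1:].reshape(-1))
        return ce + emb_loss


# Example usage with random features (batch of 2, hypothetical shapes):
model = MDNSketch()
img = lambda: torch.randn(2, 2048)
txt = lambda: torch.randn(2, 300)
q = torch.randint(0, 10000, (2, 12))
loss = model((img(), txt()), (img(), txt()), (img(), txt()), q)
```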

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Question Generation | COCO Visual Question Answering (VQA) real images 1.0 open ended | MDN | BLEU-1 | 65.1 | #1 |
| Question Generation | Visual Question Generation | MDN | BLEU-1 | 36.0 | #1 |
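As a rough illustration of the BLEU-1 metric reported above (unigram precision with a brevity penalty), here is a sentence-level example using NLTK; the example questions are made up, and the paper's reported scores are computed over the full test split, not per sentence.

```python
from nltk.translate.bleu_score import sentence_bleu

# Hypothetical reference and generated questions, pre-tokenized.
references = [["what", "is", "the", "man", "holding", "?"]]
candidate = ["what", "is", "the", "man", "doing", "?"]

# weights=(1, 0, 0, 0) restricts BLEU to unigrams, i.e., BLEU-1.
score = sentence_bleu(references, candidate, weights=(1.0, 0, 0, 0))
print(f"BLEU-1: {score:.3f}")  # 0.833 here: 5 of 6 unigrams match
```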
