Show, Ask, Attend, and Answer: A Strong Baseline For Visual Question Answering

11 Apr 2017 · Vahid Kazemi, Ali Elqursh

This paper presents a new baseline for the visual question answering task. Given an image and a question in natural language, our model produces accurate answers according to the content of the image. Our model, while being architecturally simple and relatively small in terms of trainable parameters, sets a new state of the art on both the unbalanced and balanced VQA benchmarks. On the VQA 1.0 open-ended challenge, our model achieves 64.6% accuracy on the test-standard set without using additional data, an improvement of 0.4% over the previous state of the art, and on the newly released VQA 2.0 it scores 59.7% on the validation set, outperforming the best previously reported result by 0.5%. The results presented in this paper are especially interesting because very similar models have been tried before but significantly lower performance was reported. In light of the new results, we hope to see more meaningful research on visual question answering in the future.
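As a rough illustration of the architecture the title describes, here is a minimal PyTorch sketch: "show" a pre-extracted CNN feature map, "ask" by encoding the question with an LSTM, "attend" with question-conditioned soft spatial attention, and "answer" with a classifier over a fixed answer vocabulary. All dimensions, the glimpse count, and layer names here are illustrative assumptions and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAAASketch(nn.Module):
    """Show (CNN features), Ask (LSTM), Attend (soft attention), Answer (classifier)."""
    def __init__(self, vocab_size=10000, num_answers=3000,
                 embed_dim=300, hidden_dim=1024, feat_dim=2048, glimpses=2):
        super().__init__()
        # Ask: embed the question tokens and encode them with an LSTM.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Attend: score each spatial location conditioned on the question.
        self.att_conv1 = nn.Conv2d(feat_dim + hidden_dim, 512, kernel_size=1)
        self.att_conv2 = nn.Conv2d(512, glimpses, kernel_size=1)
        # Answer: classify over a fixed answer vocabulary.
        self.classifier = nn.Sequential(
            nn.Linear(glimpses * feat_dim + hidden_dim, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, num_answers),
        )

    def forward(self, img_feats, question):
        # img_feats: (B, feat_dim, H, W) pre-extracted CNN feature map ("show").
        # question:  (B, T) token ids.
        B, C, H, W = img_feats.shape
        _, (h, _) = self.lstm(self.embed(question))
        q = h[-1]                                        # (B, hidden_dim)
        # Tile the question encoding over spatial locations and score each one.
        q_map = q[:, :, None, None].expand(B, q.size(1), H, W)
        att = self.att_conv2(F.relu(self.att_conv1(
            torch.cat([img_feats, q_map], dim=1))))      # (B, glimpses, H, W)
        att = F.softmax(att.view(B, -1, H * W), dim=-1)  # normalize over locations
        # Attention-weighted sum of image features, one vector per glimpse.
        feats = img_feats.view(B, C, H * W)
        glimpsed = torch.einsum('bgl,bcl->bgc', att, feats).reshape(B, -1)
        return self.classifier(torch.cat([glimpsed, q], dim=1))
```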

| Task | Dataset | Model | Metric | Value | Global Rank |
|------|---------|-------|--------|-------|-------------|
| Visual Question Answering (VQA) | VQA v1 test-dev | SAAA (ResNet) | Accuracy | 64.5 | #1 |
| Visual Question Answering (VQA) | VQA v1 test-std | SAAA (ResNet) | Accuracy | 64.6 | #1 |
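For reference, the Accuracy column above uses the consensus metric published with the VQA dataset, which scores a predicted answer against the ten human annotations collected per question: an answer counts as fully correct if at least three annotators gave it. A minimal sketch of that scoring rule (the full benchmark additionally averages over subsets of nine annotators, which is omitted here):

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """VQA consensus metric: min(#annotators who gave the answer / 3, 1)."""
    matches = sum(ans == predicted for ans in human_answers)
    return min(matches / 3.0, 1.0)

# Example: 2 of 10 annotators agree -> partial credit of about 0.67.
print(vqa_accuracy("yes", ["yes", "yes"] + ["no"] * 8))
```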
