Bilinear Graph Networks for Visual Question Answering

23 Jul 2019  ·  Dalu Guo, Chang Xu, Dacheng Tao

This paper revisits bilinear attention networks for the visual question answering task from a graph perspective. Classical bilinear attention networks build a bilinear attention map to extract a joint representation of words in the question and objects in the image, but they do not fully explore the relationships among words that complex reasoning requires. In contrast, we develop bilinear graph networks to model the context of the joint embeddings of words and objects. Two kinds of graphs are investigated, namely the image-graph and the question-graph. The image-graph transfers features of the detected objects to their related query words, so that the output nodes carry both semantic and factual information. The question-graph then exchanges information among these output nodes to amplify implicit yet important relationships between objects. The two graphs cooperate with each other, allowing our model to capture the relationships and dependencies between objects and thereby perform multi-step reasoning. Experimental results on the VQA v2.0 validation set demonstrate our method's ability to handle complex questions. On the test-std set, our best single model achieves state-of-the-art performance, boosting the overall accuracy to 72.41%.
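To make the two-graph idea concrete, here is a minimal sketch of the pipeline the abstract describes: a bilinear attention map between words and objects, an image-graph step that passes object features to their related words, and a question-graph step that lets the fused nodes exchange information. This is an illustration only; the class name, the single-head formulation, and all dimensions are assumptions, not the paper's exact BGN layers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearGraphSketch(nn.Module):
    """Illustrative sketch of the image-graph / question-graph idea.

    All layer names and sizes are hypothetical; the paper's actual
    bilinear graph network layers are more elaborate (e.g. multi-head).
    """

    def __init__(self, q_dim, v_dim, hid_dim):
        super().__init__()
        # Project words and objects into a shared space so a bilinear
        # attention map can relate every word to every object.
        self.q_proj = nn.Linear(q_dim, hid_dim)
        self.v_proj = nn.Linear(v_dim, hid_dim)
        # Affinity projection for the word-word question-graph.
        self.g_proj = nn.Linear(hid_dim, hid_dim)

    def forward(self, q, v):
        # q: (B, n_words, q_dim)  question word features
        # v: (B, n_objs,  v_dim)  detected object features
        qh = self.q_proj(q)                       # (B, n_words, h)
        vh = self.v_proj(v)                       # (B, n_objs,  h)

        # Bilinear attention map between words and objects.
        att = torch.einsum('bwh,boh->bwo', qh, vh)
        att = F.softmax(att, dim=-1)              # normalize over objects

        # Image-graph: each word node aggregates features of its related
        # objects, so the output nodes carry semantic + factual content.
        fused = qh + torch.einsum('bwo,boh->bwh', att, vh)

        # Question-graph: the fused nodes exchange information with one
        # another, amplifying implicit relations between objects.
        adj = F.softmax(
            torch.einsum('bwh,bxh->bwx', self.g_proj(fused), fused),
            dim=-1)
        out = fused + torch.einsum('bwx,bxh->bwh', adj, fused)
        return out                                # (B, n_words, h)

if __name__ == "__main__":
    # Toy shapes: 14 question words, 36 detected objects per image.
    model = BilinearGraphSketch(q_dim=300, v_dim=2048, hid_dim=512)
    q = torch.randn(2, 14, 300)
    v = torch.randn(2, 36, 2048)
    print(model(q, v).shape)  # torch.Size([2, 14, 512])
```

Stacking several such layers is what would enable the multi-step reasoning the abstract refers to, with each pass refining the word-object associations.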

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Visual Question Answering (VQA) | GQA Test2019 | GRN | Accuracy | 61.22 | #19 |
| | | | Binary | 78.69 | #19 |
| | | | Open | 45.81 | #27 |
| | | | Consistency | 90.31 | #27 |
| | | | Plausibility | 85.43 | #10 |
| | | | Validity | 96.36 | #39 |
| | | | Distribution | 6.77 | #34 |
| Visual Question Answering (VQA) | VQA v2 test-std | BGN (ensemble) | overall | 75.92 | #16 |
| | | | yes/no | 90.89 | #7 |
| | | | number | 61.13 | #7 |
| | | | other | 66.28 | #7 |
