Learning Compositional Representation for Few-shot Visual Question Answering

21 Feb 2021  ·  Dalu Guo, Dacheng Tao ·

Current Visual Question Answering (VQA) methods perform well on answers with ample training data but achieve limited accuracy on novel answers with only a few examples. Humans, in contrast, can quickly adapt to new categories from just a few glimpses, because they compose previously learned concepts to recognize the novel class; this ability has hardly been explored by deep learning methods. In this paper, we therefore propose to extract attributes from answers with sufficient data and compose them to constrain the learning of few-shot answers. We generate a few-shot VQA dataset with a variety of answers and their attributes without any human annotation. On this dataset, we build an attribute network that disentangles the attributes by learning their features from parts of the image rather than the whole image. Experimental results on the VQA v2.0 validation set demonstrate the effectiveness of the proposed attribute network and of the constraint between answers and their corresponding attributes, as well as our method's ability to handle answers with few training examples.
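The core idea of constraining a few-shot answer by the composition of its attributes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the attribute names, the mean-pooling composition, and the squared-error penalty are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8  # toy embedding dimension

# Hypothetical attribute embeddings, assumed to be learned from
# answers that have plenty of training data.
attribute_emb = {
    "red": rng.normal(size=dim),
    "car": rng.normal(size=dim),
}

def compose(attrs):
    """Compose attribute embeddings into an answer prototype.
    Mean pooling is an assumed composition operator."""
    return np.mean([attribute_emb[a] for a in attrs], axis=0)

def composition_constraint(answer_emb, attrs):
    """Squared-error penalty tying an answer embedding to the
    composition of its attributes (an assumed form of the constraint)."""
    diff = answer_emb - compose(attrs)
    return float(diff @ diff)

# A randomly initialized few-shot answer starts far from its prototype;
# minimizing the constraint pulls it toward the attribute composition.
few_shot_answer = rng.normal(size=dim)
loss_before = composition_constraint(few_shot_answer, ["red", "car"])
loss_at_prototype = composition_constraint(compose(["red", "car"]), ["red", "car"])
```

In a full model this penalty would be added to the VQA classification loss, so answers with few examples inherit structure from attributes learned on data-rich answers.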


