VQA-HAT (Human ATtention) is a dataset for evaluating which regions of an image are informative for answering a given question about it. It consists of human visual attention maps collected over images from the original VQA dataset and contains more than 60k attention maps.
Source: Human Attention in Visual Question Answering