VQA-HAT (VQA Human Attention)

VQA-HAT (Human ATtention) is a dataset of human visual attention maps over the images in the original VQA dataset, showing which image regions humans find informative for answering a given question about the image. The maps were collected by asking annotators to sharpen (deblur) the regions of a blurred image they needed in order to answer the question. The dataset contains more than 60k attention maps.

Source: Human Attention in Visual Question Answering: Do Humans and Deep Networks Look at the Same Regions? (Das et al., EMNLP 2016)
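
Each attention map is a spatial heat map aligned with its source VQA image: brighter pixels mark regions annotators sharpened while answering the question. Below is a minimal sketch of loading one map and normalizing it into a spatial distribution; the directory layout and file naming are illustrative assumptions, not taken from the official release.

```python
# Sketch: load a VQA-HAT attention map, assuming the maps ship as
# grayscale PNGs named "<question_id>_<annotation>.png" (the path and
# file name below are hypothetical, for illustration only).
import numpy as np
from PIL import Image

def load_attention_map(path: str) -> np.ndarray:
    """Read an attention map and normalize it so its values sum to 1."""
    attn = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
    total = attn.sum()
    return attn / total if total > 0 else attn  # guard against empty maps

attn = load_attention_map("vqahat_train/262148000_1.png")  # hypothetical path
print(attn.shape, float(attn.max()))
```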

License

  • Unknown

Modalities

  • Images

Languages

  • English

Tasks

  • Visual Question Answering