
Visual7W

Introduced by Zhu et al. in Visual7W: Grounded Question Answering in Images

Visual7W is a large-scale visual question answering (QA) dataset with object-level groundings and multimodal answers. Each question starts with one of the seven Ws: what, where, when, who, why, how, and which. The dataset was collected from 47,300 COCO images and contains 327,929 QA pairs, together with 1,311,756 human-generated multiple-choice answers and 561,459 object groundings from 36,579 categories.

Source: https://github.com/yukezhu/visual7w-toolkit
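
As a quick orientation, the sketch below shows one way to load and inspect the telling-task QA pairs with plain Python. The file name `dataset_v7w_telling.json` and the field names (`images`, `qa_pairs`, `question`, `answer`, `multiple_choices`, `type`) reflect the JSON layout distributed via the toolkit linked above, but treat them as assumptions and verify against your local copy.

```python
import json
import random

# Load the telling-task annotations (file name assumed from the toolkit's
# standard release; adjust the path to wherever you downloaded the JSON).
with open("dataset_v7w_telling.json") as f:
    dataset = json.load(f)

# Each image entry is assumed to carry its QA pairs; each QA pair is assumed
# to hold the question, the ground-truth answer, three distractor choices,
# and the question type (what/where/when/who/why/how/which).
qa_pairs = [qa for image in dataset["images"] for qa in image["qa_pairs"]]
print(f"Total QA pairs: {len(qa_pairs)}")

sample = random.choice(qa_pairs)
print("Question:", sample["question"])
print("Answer:  ", sample["answer"])
print("Choices: ", sample["multiple_choices"])
print("Type:    ", sample["type"])
```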

License

  • Unknown

Modalities

  • Images
  • Texts

Languages

  • English

Tasks

  • Visual Question Answering