The VideoNavQA dataset contains pairs of questions and videos generated in the House3D environment. Its goal is to assess question-answering performance given near-ideal navigation paths, while covering a much wider variety of questions than current instantiations of the Embodied Question Answering (EQA) task.
VideoNavQA contains approximately 101,000 video–question pairs, with 28 question types grouped into 8 categories and 70 possible answers. Each question type is associated with a template that enables programmatic generation using ground-truth information extracted from the video. The complexity of the questions goes well beyond that of other tasks that use this generation method (such as CLEVR): questions involve single or multiple object/room existence, object/room counting, object color recognition and localization, spatial reasoning, object/room size comparison, and equality of object attributes (color, room location).

Source: VideoNavQA: Bridging the Gap between Visual and Embodied Question Answering
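To illustrate the template-based generation described above, here is a minimal sketch of how a question template might be filled from ground-truth video annotations. The template strings, question-type names, and `generate_question` helper are hypothetical, not taken from the actual VideoNavQA generation code:

```python
# Hypothetical templates for a few of the question categories mentioned
# above; the real dataset defines 28 question types in 8 categories.
TEMPLATES = {
    "existence": "Is there a {color} {obj} in the {room}?",
    "count": "How many {obj}s are there in the {room}?",
    "color": "What color is the {obj} in the {room}?",
}

def generate_question(qtype: str, ground_truth: dict) -> str:
    """Instantiate a template with ground-truth attributes from the video."""
    return TEMPLATES[qtype].format(**ground_truth)

# Example ground-truth record (illustrative values only):
gt = {"color": "blue", "obj": "sofa", "room": "living room"}
print(generate_question("existence", gt))
# -> Is there a blue sofa in the living room?
```

In the actual pipeline, the ground-truth attributes would come from the House3D environment's annotations along the navigation path, and generated questions are paired with the rendered videos.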