Neural Reasoning, Fast and Slow, for Video Question Answering

10 Jul 2019 · Thao Minh Le, Vuong Le, Svetha Venkatesh, Truyen Tran

What does it take to design a machine that learns to answer natural questions about a video? A Video QA system must simultaneously understand language, represent visual content over space-time, iteratively transform these representations in response to the linguistic content of the query, and finally arrive at a sensible answer. While recent advances in linguistic and visual question answering have enabled sophisticated representations and neural reasoning mechanisms, major challenges in Video QA remain in the dynamic grounding of concepts, relations, and actions needed to support the reasoning process. Inspired by the dual-process account of human reasoning, we design a dual-process neural architecture composed of a question-guided video processing module (System 1, fast and reactive) followed by a generic reasoning module (System 2, slow and deliberative). System 1 is a hierarchical model that encodes visual patterns about objects, actions, and relations in space-time, given the textual cues from the question. The encoded representation is a set of high-level visual features, which is then passed to System 2. There, multi-step inference iteratively chains visual elements as instructed by the textual elements. The system is evaluated on the SVQA (synthetic) and TGIF-QA (real) datasets, demonstrating competitive results, with a large margin in cases requiring multi-step reasoning.
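The two-stage pipeline described above can be illustrated with a minimal PyTorch-style sketch. Everything below is an illustrative assumption rather than the paper's actual architecture: the module names (`System1`, `System2`), the GRU frame encoder, the cross-attention fusion, and the MAC-style recurrent controller standing in for the generic reasoning module are all hypothetical stand-ins that capture the flow of question-conditioned encoding followed by multi-step attention over a visual feature set.

```python
import torch
import torch.nn as nn

class System1(nn.Module):
    """Fast, reactive stage: question-guided video encoding (illustrative)."""
    def __init__(self, dim=512):
        super().__init__()
        self.temporal = nn.GRU(dim, dim, batch_first=True)   # clip-level temporal context
        self.fuse = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

    def forward(self, frames, words):
        # frames: (B, T, dim) per-frame features; words: (B, L, dim) word embeddings
        clips, _ = self.temporal(frames)                     # contextualized frame features
        guided, _ = self.fuse(query=clips, key=words, value=words)
        return clips + guided                                # question-aware visual set

class System2(nn.Module):
    """Slow, deliberative stage: multi-step attention-based reasoning (illustrative)."""
    def __init__(self, dim=512, steps=3):
        super().__init__()
        self.steps = steps
        self.control = nn.GRUCell(dim, dim)                  # reasoning-state update
        self.read = nn.Linear(dim, dim)                      # query projection for attention

    def forward(self, visual_set, q_summary):
        # visual_set: (B, N, dim); q_summary: (B, dim) pooled question vector
        state = q_summary
        for _ in range(self.steps):
            state = self.control(q_summary, state)           # refine reasoning state
            scores = torch.einsum('bd,bnd->bn', self.read(state), visual_set)
            attended = torch.einsum('bn,bnd->bd', scores.softmax(-1), visual_set)
            state = state + attended                         # chain in one visual element
        return state                                         # answer representation

# Toy end-to-end pass with random features.
B, T, L, dim = 2, 16, 8, 512
frames, words = torch.randn(B, T, dim), torch.randn(B, L, dim)
visual_set = System1(dim)(frames, words)
answer_vec = System2(dim)(visual_set, words.mean(dim=1))     # (B, dim), fed to a classifier
```

The design point the sketch preserves is the division of labor: System 1 runs once to produce a question-conditioned feature set, while System 2 loops a fixed number of times, each step attending to and accumulating a different visual element under the guidance of the question.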
