Search Results for author: Moshiur R. Farazi

Found 4 papers, 0 papers with code

Accuracy vs. Complexity: A Trade-off in Visual Question Answering Models

no code implementations · 20 Jan 2020 · Moshiur R. Farazi, Salman H. Khan, Nick Barnes

However, modelling the visual and semantic features in a high dimensional (joint embedding) space is computationally expensive, and more complex models often result in trivial improvements in the VQA accuracy.

Question Answering · Visual Question Answering

Question-Agnostic Attention for Visual Question Answering

no code implementations · 9 Aug 2019 · Moshiur R. Farazi, Salman H. Khan, Nick Barnes

Visual Question Answering (VQA) models employ attention mechanisms to discover image locations that are most relevant for answering a specific question.

Question Answering · Visual Question Answering

Reciprocal Attention Fusion for Visual Question Answering

no code implementations · 11 May 2018 · Moshiur R. Farazi, Salman H. Khan

Existing attention mechanisms attend either to local image-grid or to object-level features for Visual Question Answering (VQA).

Object · Question Answering · +2
