Search Results for author: Arijit Ray

Found 8 papers, 0 papers with code

Improving Users' Mental Model with Attention-directed Counterfactual Edits

no code implementations • 13 Oct 2021 • Kamran Alipour, Arijit Ray, Xiao Lin, Michael Cogswell, Jurgen P. Schulze, Yi Yao, Giedrius T. Burachas

In the domain of Visual Question Answering (VQA), studies have shown improvement in users' mental model of the VQA system when they are exposed to examples of how these systems answer certain Image-Question (IQ) pairs.

Tasks: Question Answering · Retrieval · +2

The Impact of Explanations on AI Competency Prediction in VQA

no code implementations • 2 Jul 2020 • Kamran Alipour, Arijit Ray, Xiao Lin, Jurgen P. Schulze, Yi Yao, Giedrius T. Burachas

In this paper, we evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA).

Tasks: Language Modelling · Question Answering · +2

Can You Explain That? Lucid Explanations Help Human-AI Collaborative Image Retrieval

no code implementations • 5 Apr 2019 • Arijit Ray, Yi Yao, Rakesh Kumar, Ajay Divakaran, Giedrius Burachas

Our experiments, therefore, demonstrate that ExAG is an effective means to evaluate the efficacy of AI-generated explanations on a human-AI collaborative task.

Tasks: Image Retrieval · Question Answering · +3

Generating Natural Language Explanations for Visual Question Answering using Scene Graphs and Visual Attention

no code implementations • 15 Feb 2019 • Shalini Ghosh, Giedrius Burachas, Arijit Ray, Avi Ziskind

In this paper, we present a novel approach for the task of eXplainable Question Answering (XQA), i.e., generating natural language (NL) explanations for the Visual Question Answering (VQA) problem.

Tasks: Explanation Generation · Language Modelling · +3
