no code implementations • 14 Jun 2022 • Aditya Lahiri, Kamran Alipour, Ehsan Adeli, Babak Salimi
With the widespread use of sophisticated machine learning models in sensitive applications, understanding their decision-making has become an essential task.
no code implementations • 10 Jun 2022 • Kamran Alipour, Aditya Lahiri, Ehsan Adeli, Babak Salimi, Michael Pazzani
Despite their high accuracy, modern complex image classifiers cannot be trusted for sensitive tasks due to their opaque decision-making processes and potential biases.
no code implementations • 13 Oct 2021 • Kamran Alipour, Arijit Ray, Xiao Lin, Michael Cogswell, Jurgen P. Schulze, Yi Yao, Giedrius T. Burachas
In the domain of Visual Question Answering (VQA), studies have shown that users' mental models of a VQA system improve when users are exposed to examples of how the system answers certain Image-Question (IQ) pairs.
no code implementations • 26 Mar 2021 • Arijit Ray, Michael Cogswell, Xiao Lin, Kamran Alipour, Ajay Divakaran, Yi Yao, Giedrius Burachas
Hence, we propose Error Maps, which clarify errors by highlighting the image regions where the model is prone to err.
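As a rough illustration of the general idea (not the paper's actual method), the sketch below assumes a per-pixel error-likelihood array has already been computed for an image and simply overlays it as a heatmap so that error-prone regions stand out; all names and values here are illustrative placeholders.

# Minimal sketch: overlay an assumed error-likelihood map on an image.
# The real Error Maps come from the VQA model's error analysis; here the
# map is random data standing in for such an output.
import numpy as np
import matplotlib.pyplot as plt

def overlay_error_map(image, error_map, alpha=0.5):
    """Blend a normalized error-likelihood map over an RGB image."""
    # Normalize the map to [0, 1] so it can serve as a heatmap.
    span = error_map.max() - error_map.min()
    norm = (error_map - error_map.min()) / (span + 1e-8)
    plt.imshow(image)
    plt.imshow(norm, cmap="hot", alpha=alpha)  # brighter = more error-prone
    plt.axis("off")
    plt.show()

# Example with random stand-in data for the image and the error map.
rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))
error_map = rng.random((224, 224))
overlay_error_map(image, error_map)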
no code implementations • 2 Jul 2020 • Kamran Alipour, Arijit Ray, Xiao Lin, Jurgen P. Schulze, Yi Yao, Giedrius T. Burachas
In this paper, we evaluate the impact of explanations on the user's mental model of AI agent competency within the task of visual question answering (VQA).
no code implementations • 19 Apr 2020 • Ali Hariri, Kamran Alipour, Yash Mantri, Jurgen P. Schulze, Jesse V. Jokerst
We suggest that this tool can improve the value of such sources in photoacoustic imaging.
no code implementations • 1 Mar 2020 • Kamran Alipour, Jurgen P. Schulze, Yi Yao, Avi Ziskind, Giedrius Burachas
Explainability and interpretability of AI models are essential factors affecting the safety of AI.