no code implementations • 6 Jul 2022 • Avi Ziskind, Sujeong Kim, Giedrius T. Burachas
Herein, we propose a framework for evaluating visual representations for illumination invariance in the context of depth perception.
no code implementations • 1 Mar 2020 • Kamran Alipour, Jurgen P. Schulze, Yi Yao, Avi Ziskind, Giedrius Burachas
Explainability and interpretability of AI models are essential factors affecting the safety of AI.
no code implementations • 15 Feb 2019 • Shalini Ghosh, Giedrius Burachas, Arijit Ray, Avi Ziskind
In this paper, we present a novel approach for the task of eXplainable Question Answering (XQA), i.e., generating natural language (NL) explanations for the Visual Question Answering (VQA) problem.