no code implementations • 30 Nov 2023 • Saurabh Farkya, Aswin Raghavan, Avi Ziskind
We present a method to improve the robustness of quantized DNNs to white-box adversarial attacks.
no code implementations • 6 Jul 2022 • Avi Ziskind, Sujeong Kim, Giedrius T. Burachas
Herein, we propose a framework for evaluating visual representations for illumination invariance in the context of depth perception.
no code implementations • 1 Mar 2020 • Kamran Alipour, Jurgen P. Schulze, Yi Yao, Avi Ziskind, Giedrius Burachas
Explainability and interpretability of AI models are essential factors affecting the safety of AI.
no code implementations • 15 Feb 2019 • Shalini Ghosh, Giedrius Burachas, Arijit Ray, Avi Ziskind
In this paper, we present a novel approach to the task of eXplainable Question Answering (XQA), i.e., generating natural language (NL) explanations for the Visual Question Answering (VQA) problem.