no code implementations • 26 Mar 2021 • Arijit Ray, Michael Cogswell, Xiao Lin, Kamran Alipour, Ajay Divakaran, Yi Yao, Giedrius Burachas
Hence, we propose Error Maps that clarify the error by highlighting image regions where the model is prone to err.
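The paper's Error Maps idea can be illustrated with a minimal sketch: aggregate per-question correctness into image space by weighting each question's attended region by whether the model erred. The function name, the 0/1 correctness flags, and the per-question region masks are all assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def error_map(model_erred, region_masks):
    """Hypothetical sketch: build an image-space error heat map.

    model_erred: per-question 0/1 flags (1 = the model answered wrong).
    region_masks: (num_questions, H, W) masks marking the image region
    each question refers to.
    """
    erred = np.asarray(model_erred, dtype=float)
    masks = np.asarray(region_masks, dtype=float)
    # Sum error-weighted masks, then normalize by how often each pixel
    # was covered by any question, guarding against division by zero.
    weighted = (erred[:, None, None] * masks).sum(axis=0)
    coverage = masks.sum(axis=0)
    return np.divide(weighted, coverage,
                     out=np.zeros_like(weighted), where=coverage > 0)
```

Pixels with values near 1 mark regions where the model is prone to err, matching the intuition described above.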
no code implementations • 19 Nov 2020 • Meng Ye, Xiao Lin, Giedrius Burachas, Ajay Divakaran, Yi Yao
Few-Shot Learning (FSL) aims to improve a model's generalization capability in low-data regimes.

no code implementations • 1 Mar 2020 • Kamran Alipour, Jurgen P. Schulze, Yi Yao, Avi Ziskind, Giedrius Burachas
Explainability and interpretability of AI models are essential factors affecting the safety of AI.
no code implementations • IJCNLP 2019 • Arijit Ray, Karan Sikka, Ajay Divakaran, Stefan Lee, Giedrius Burachas
For instance, if a model answers "red" to "What color is the balloon?"
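The consistency property implied by this example (a model that answers "red" should agree with itself on entailed rephrasings) can be sketched as a simple evaluation metric. The function name, the callable-model interface, and the grouping of questions are assumptions for illustration, not the paper's method.

```python
def consistency_score(model, question_groups):
    """Hypothetical sketch: fraction of question groups answered consistently.

    model: a callable mapping a question string to an answer string.
    question_groups: lists of questions that should all receive the same
    answer, e.g. "What color is the balloon?" plus entailed rephrasings.
    """
    consistent = 0
    for group in question_groups:
        # A group is consistent if every question in it gets one answer.
        answers = {model(q) for q in group}
        consistent += int(len(answers) == 1)
    return consistent / len(question_groups)
```

A score below 1.0 flags groups where the model contradicts itself across rephrasings.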
no code implementations • 5 Apr 2019 • Arijit Ray, Yi Yao, Rakesh Kumar, Ajay Divakaran, Giedrius Burachas
Our experiments, therefore, demonstrate that ExAG is an effective means to evaluate the efficacy of AI-generated explanations on a human-AI collaborative task.
no code implementations • 15 Feb 2019 • Shalini Ghosh, Giedrius Burachas, Arijit Ray, Avi Ziskind
In this paper, we present a novel approach to the task of eXplainable Question Answering (XQA), i.e., generating natural language (NL) explanations for the Visual Question Answering (VQA) problem.