no code implementations • 16 Mar 2024 • Qilong Zhao, Yifei Zhang, Mengdan Zhu, Siyi Gu, Yuyang Gao, Xiaofeng Yang, Liang Zhao
Explanation supervision aims to enhance deep learning models by integrating additional signals that guide how the model generates its explanations, and it has shown notable improvements in both the predictive performance and the explainability of the model.
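A minimal sketch of what such an explanation-supervision objective might look like, assuming a model that exposes a saliency map alongside its logits; the joint loss, the `lambda_exp` trade-off weight, and the mean-squared alignment term are illustrative assumptions, not the authors' exact formulation:

```python
import torch
import torch.nn.functional as F

def explanation_supervised_loss(logits, saliency, labels, human_mask, lambda_exp=0.5):
    """Joint objective: task loss plus a term aligning the model's
    saliency map with a human-annotated mask (illustrative sketch only).

    logits:      (B, num_classes) classification scores
    saliency:    (B, H, W) model-produced explanation map in [0, 1]
    labels:      (B,) ground-truth class indices
    human_mask:  (B, H, W) binary human annotation of relevant regions
    lambda_exp:  assumed trade-off weight between the two terms
    """
    task_loss = F.cross_entropy(logits, labels)
    # Alignment term: push the model's explanation toward the annotation.
    exp_loss = F.mse_loss(saliency, human_mask.float())
    return task_loss + lambda_exp * exp_loss
```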
no code implementations • 12 Oct 2023 • Yifei Zhang, Siyi Gu, James Song, Bo Pan, Guangji Bai, Liang Zhao
Our proposed benchmarks facilitate a fair evaluation and comparison of visual explanation methods.
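One plausible way such a benchmark could score a visual explanation is by thresholding the saliency map and measuring its overlap with a ground-truth annotation; the threshold value and the IoU metric below are assumptions for illustration, not the benchmark's prescribed protocol:

```python
import torch

def saliency_iou(saliency, gt_mask, threshold=0.5):
    """Intersection-over-Union between a thresholded saliency map and a
    ground-truth annotation mask (one plausible evaluation metric).

    saliency: (H, W) explanation map in [0, 1]
    gt_mask:  (H, W) binary human annotation
    """
    pred = saliency >= threshold            # binarize the explanation
    gt = gt_mask.bool()
    intersection = (pred & gt).sum().float()
    union = (pred | gt).sum().float().clamp(min=1)  # avoid division by zero
    return (intersection / union).item()
```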
no code implementations • 12 Oct 2023 • Yifei Zhang, Siyi Gu, Bo Pan, Guangji Bai, Xiaofeng Yang, Liang Zhao
To tackle these challenges, we propose a novel framework called Visual Attention-Prompted Prediction and Learning, which seamlessly integrates visual attention prompts into the model's decision-making process and adapts to predict on images both with and without attention prompts.
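A minimal sketch of one way an attention prompt could be injected at prediction time, with the input modulated by the prompt and a uniform map standing in when no prompt is given; the modulation rule and the `alpha` weight are illustrative assumptions, not the paper's actual mechanism:

```python
import torch

def apply_attention_prompt(image, prompt=None, alpha=1.0):
    """Modulate an input image with a visual attention prompt (sketch).

    image:  (B, C, H, W) input batch
    prompt: (B, 1, H, W) attention map in [0, 1], or None when absent
    alpha:  assumed strength of the prompt's influence
    """
    if prompt is None:
        # No prompt available: fall back to a uniform map so the same
        # forward path handles both prompted and unprompted images.
        prompt = torch.full_like(image[:, :1], 0.5)
    # Re-weight pixels toward the prompted regions; a uniform prompt
    # leaves the image unchanged.
    return image * (1.0 + alpha * (prompt - 0.5))
```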
no code implementations • ICCV 2023 • Yifei Zhang, Siyi Gu, Yuyang Gao, Bo Pan, Xiaofeng Yang, Liang Zhao
This technique aims to improve the model's predictive performance by incorporating human understanding of the prediction process into the training phase.
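In the same spirit, human annotations can enter the training phase as a constraint on where the model is allowed to look. The input-gradient penalty below follows the well-known "right for the right reasons" pattern and is a hedged sketch of that general idea, not this paper's specific method:

```python
import torch
import torch.nn.functional as F

def right_reasons_loss(model, images, labels, relevance_mask, lambda_rr=1.0):
    """Penalize input gradients that fall outside human-annotated regions
    ("right for the right reasons"-style sketch).

    relevance_mask: (B, 1, H, W) binary map, 1 where a human marked the
                    region as relevant to the prediction.
    """
    images = images.detach().clone().requires_grad_(True)
    logits = model(images)
    task_loss = F.cross_entropy(logits, labels)
    # Gradient of the task loss w.r.t. the input pixels.
    grads, = torch.autograd.grad(task_loss, images, create_graph=True)
    # Penalize attribution mass on irrelevant (mask == 0) pixels.
    rr_loss = ((1 - relevance_mask) * grads.pow(2)).sum()
    return task_loss + lambda_rr * rr_loss
```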
no code implementations • 7 Dec 2022 • Yuyang Gao, Siyi Gu, Junji Jiang, Sungsoo Ray Hong, Dazhou Yu, Liang Zhao
As the societal impact of Deep Neural Networks (DNNs) grows, the goals for advancing DNNs become more complex and diverse, ranging from improving a conventional model accuracy metric to infusing advanced human virtues such as fairness, accountability, and transparency (FAccT), as well as unbiasedness.
Explainable Artificial Intelligence (XAI) +1
1 code implementation • 27 Jun 2022 • Yuyang Gao, Tong Steven Sun, Guangji Bai, Siyi Gu, Sungsoo Ray Hong, Liang Zhao
Despite the fast progress of explanation techniques for modern Deep Neural Networks (DNNs), where the main focus is "how to generate the explanations", advanced research questions that examine the quality of the explanation itself (e.g., "whether the explanations are accurate") and that improve explanation quality (e.g., "how to adjust the model to generate more accurate explanations when they are inaccurate") remain relatively under-explored.
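One generic way to "adjust the model when explanations are inaccurate" is to add explanation supervision only for samples whose current explanations fall below a quality tolerance. Everything below, the `explain_fn`/`quality_fn` hooks, the tolerance, and the extra loss term, is an illustrative assumption rather than this paper's framework; `explain_fn` is assumed to produce saliency maps that are differentiable with respect to the model parameters:

```python
import torch
import torch.nn.functional as F

def adjustment_step(model, optimizer, images, labels, human_masks,
                    explain_fn, quality_fn, tol=0.5, lambda_exp=1.0):
    """One fine-tuning step that adds explanation supervision only for
    samples whose current explanations are judged inaccurate (sketch).

    explain_fn(model, images)   -> (B, H, W) differentiable saliency maps
    quality_fn(saliency, masks) -> (B,) per-sample quality scores, e.g. IoU
    """
    logits = model(images)
    loss = F.cross_entropy(logits, labels)

    saliency = explain_fn(model, images)
    quality = quality_fn(saliency, human_masks)   # e.g., per-sample IoU
    bad = quality < tol                           # inaccurate explanations
    if bad.any():
        # Pull only the inaccurate explanations toward the annotations.
        loss = loss + lambda_exp * F.mse_loss(
            saliency[bad], human_masks[bad].float())

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```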