no code implementations • 30 Mar 2023 • Weina Jin, Xiaoxiao Li, Ghassan Hamarneh
Optimizing XAI for plausibility regardless of the correctness of the model's decision also jeopardizes model trustworthiness, because doing so breaks an important assumption of human-to-human explanation: that plausible explanations typically imply correct decisions, and vice versa. Violating this assumption eventually leads to either undertrust or overtrust of AI models.
Explainable Artificial Intelligence (XAI) +1
1 code implementation • 10 Feb 2023 • Weina Jin, Jianyu Fan, Diane Gromala, Philippe Pasquier, Ghassan Hamarneh
The EUCA study findings, the identified explanation forms and goals for technical specification, and the EUCA study dataset support the design and evaluation of end-user-centered XAI techniques for accessible, safe, and accountable AI.
no code implementations • 18 Aug 2022 • Weina Jin, Jianyu Fan, Diane Gromala, Philippe Pasquier, Xiaoxiao Li, Ghassan Hamarneh
The boundaries of existing explainable artificial intelligence (XAI) algorithms are confined to problems grounded in technical users' demand for explainability.
1 code implementation • 12 Mar 2022 • Weina Jin, Xiaoxiao Li, Ghassan Hamarneh
The evaluation and the MSFI metric can guide the design and selection of XAI algorithms to meet clinical requirements for multi-modal explanation.
Explainable Artificial Intelligence (XAI) Feature Importance
1 code implementation • 16 Feb 2022 • Weina Jin, Xiaoxiao Li, Mostafa Fatehi, Ghassan Hamarneh
Following the guidelines, we conducted a systematic evaluation on a novel problem of multi-modal medical image explanation with two clinical tasks, and proposed new evaluation metrics accordingly.
Computational Efficiency Explainable artificial intelligence +1
no code implementations • 11 Jul 2021 • Weina Jin, Xiaoxiao Li, Ghassan Hamarneh
The maps highlight the features that are important for the AI model's prediction.
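One common way such feature-importance maps are produced is by perturbation: occlude each input feature in turn and measure how much the model's output drops. The sketch below illustrates the idea on a hypothetical linear scorer; the `predict` function, its weights, and the baseline value are illustrative stand-ins, not the models or algorithms from the papers above.

```python
# Hedged sketch of occlusion-style feature importance.
# `predict` is a toy stand-in for a trained model (weights are arbitrary);
# real XAI methods apply the same perturb-and-measure idea to deep networks.

def predict(x):
    # Toy model: weighted sum of four input features.
    weights = [0.1, 0.8, 0.05, 0.05]
    return sum(w * xi for w, xi in zip(weights, x))

def occlusion_importance(x, baseline=0.0):
    """Importance of feature i = drop in the prediction when
    feature i is replaced by a baseline ("occluded")."""
    base_score = predict(x)
    importances = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline
        importances.append(base_score - predict(occluded))
    return importances

print(occlusion_importance([1.0, 1.0, 1.0, 1.0]))
```

For this toy model the importance scores simply recover the weights, so the second feature dominates the map, which is exactly the behavior a heatmap over image regions is meant to surface.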
1 code implementation • 4 Feb 2021 • Weina Jin, Jianyu Fan, Diane Gromala, Philippe Pasquier, Ghassan Hamarneh
The ability to explain decisions to end-users is a necessity for deploying AI as critical decision support.
Decision Making Explainable artificial intelligence Human-Computer Interaction
no code implementations • 28 Nov 2019 • Weina Jin, Mostafa Fatehi, Kumar Abhishek, Mayur Mallya, Brian Toyota, Ghassan Hamarneh
We believe that these technical approaches will facilitate the development of a fully-functional AI tool in the clinical care of patients with gliomas.