no code implementations • 11 Feb 2023 • Shruthi Chari, Prasant Acharya, Daniel M. Gruen, Olivia Zhang, Elif K. Eyigoz, Mohamed Ghalwash, Oshani Seneviratne, Fernando Suarez Saiz, Pablo Meyer, Prithwish Chakraborty, Deborah L. McGuinness
All of these steps were performed in engagement with medical experts, including a final evaluation of the dashboard results by an expert medical panel.
no code implementations • 6 Jul 2021 • Shruthi Chari, Prithwish Chakraborty, Mohamed Ghalwash, Oshani Seneviratne, Elif K. Eyigoz, Daniel M. Gruen, Fernando Suarez Saiz, Ching-Hua Chen, Pablo Meyer Rojas, Deborah L. McGuinness
To enable the adoption of ever-improving AI risk prediction models in practice, we have begun to explore techniques to contextualize such models along three dimensions of interest: the patients' clinical state, AI predictions about their risk of complications, and algorithmic explanations supporting the predictions.
no code implementations • 4 May 2021 • Ishita Padhiar, Oshani Seneviratne, Shruthi Chari, Daniel Gruen, Deborah L. McGuinness
Our motivation in using FEO is to empower users to make decisions about their health, fully equipped with an understanding of the AI recommender systems as they relate to user questions, by providing the reasoning behind recommendations in the form of explanations.
no code implementations • 4 Oct 2020 • Shruthi Chari, Oshani Seneviratne, Daniel M. Gruen, Morgan A. Foreman, Amar K. Das, Deborah L. McGuinness
We addressed the problem of a lack of semantic representation for user-centric explanations and different explanation types in our Explanation Ontology (https://purl.org/heals/eo).
no code implementations • 4 Oct 2020 • Shruthi Chari, Oshani Seneviratne, Daniel M. Gruen, Morgan A. Foreman, Amar K. Das, Deborah L. McGuinness
With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration, mapping end user needs to specific explanation types and the system's AI capabilities.
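The mapping this entry describes — from end-user needs to explanation types and the AI capabilities required to produce them — can be illustrated with a minimal sketch. This is a hypothetical simplification, not the actual Explanation Ontology schema; the type names and capability labels below are illustrative assumptions.

```python
# Hypothetical sketch (NOT the actual Explanation Ontology schema): a minimal
# structured mapping from end-user questions to explanation types and the AI
# capabilities a system would need to generate each type.

EXPLANATION_TYPES = {
    "contrastive": {
        "user_question": "Why this prediction and not another?",
        "required_capabilities": ["counterfactual reasoning"],
    },
    "case_based": {
        "user_question": "What similar cases support this?",
        "required_capabilities": ["similarity retrieval"],
    },
    "scientific": {
        "user_question": "What published evidence backs this recommendation?",
        "required_capabilities": ["literature linkage"],
    },
}

def explanation_types_for(capabilities):
    """Return the explanation types a system can support, given its capabilities."""
    available = set(capabilities)
    return sorted(
        name
        for name, spec in EXPLANATION_TYPES.items()
        if set(spec["required_capabilities"]) <= available
    )

print(explanation_types_for(["similarity retrieval", "counterfactual reasoning"]))
# → ['case_based', 'contrastive']
```

In an ontology-backed system, this lookup would instead be an OWL/SPARQL query over modeled classes and properties; the dictionary form only conveys the shape of the mapping.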
no code implementations • 17 Mar 2020 • Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L. McGuinness
Additionally, borrowing from the strengths of past approaches and identifying the gaps that must be addressed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.
no code implementations • 17 Mar 2020 • Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L. McGuinness
Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently.
no code implementations • 9 Jul 2019 • Shruthi Chari, Miao Qi, Nkechinyere N. Agu, Oshani Seneviratne, James P. McCusker, Kristin P. Bennett, Amar K. Das, Deborah L. McGuinness
To address these challenges, we develop an ontology-enabled prototype system, which exposes the population descriptions in research studies in a declarative manner, with the ultimate goal of allowing medical practitioners to better understand the applicability and generalizability of treatment recommendations.
no code implementations • 20 Jul 2018 • Oshani Seneviratne, Sabbir M. Rashid, Shruthi Chari, James P. McCusker, Kristin P. Bennett, James A. Hendler, Deborah L. McGuinness
With the rapid advancements in cancer research, the information useful for characterizing disease, staging tumors, and creating treatment and survivorship plans has been changing at a pace that makes it challenging for physicians to remain current.