no code implementations • 7 Aug 2023 • Kary Främling
The Contextual Importance and Utility (CIU) method provides a unified definition of global and local feature importance that also applies to post-hoc explanations; the value utility concept gives an instance-level assessment of how favourable or unfavourable a feature value is for the outcome.
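The contextual importance and utility notions can be estimated for any black-box model by varying one feature while holding the others fixed. The sketch below assumes a scalar-output model callable and a single numeric feature; the function name, signature, and sampling scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ciu(model, instance, feature, feature_range, out_min, out_max, n=100):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU)
    for one feature of one instance. `model` is any callable mapping a
    1-D feature vector to a scalar output; this is a hypothetical helper,
    not the CIU authors' API."""
    lo, hi = feature_range
    outputs = []
    for v in np.linspace(lo, hi, n):
        x = np.array(instance, dtype=float)
        x[feature] = v          # vary this feature, keep the context fixed
        outputs.append(model(x))
    cmin, cmax = min(outputs), max(outputs)
    y = model(np.array(instance, dtype=float))
    # CI: share of the model's overall output range that is covered when
    # this feature varies over its range in the current context.
    ci = (cmax - cmin) / (out_max - out_min)
    # CU: where the current output sits within that contextual range
    # (1.0 = most favourable value, 0.0 = least favourable).
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu
```

For a model that simply returns its first feature, the first feature has CI = 1 (it spans the whole output range) and CU equal to its current value.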
no code implementations • 22 Mar 2023 • Minal Suresh Patil, Kary Främling
AI developers can create ethical systems that benefit society by considering contextual factors like societal norms and values.
no code implementations • 21 Mar 2023 • Minal Suresh Patil, Kary Främling
This work introduces the notion of intermediate concepts, based on a levels structure, to aid the explainability of black-box models.
1 code implementation • 15 Feb 2022 • Kary Främling
This paper provides new theory to support the eXplainable AI (XAI) method Contextual Importance and Utility (CIU).
no code implementations • 6 Jul 2021 • Matti Huotari, Shashank Arora, Avleen Malhi, Kary Främling
For this work, we have access to a unique data set of 45 lithium-ion battery packs with large variation in the data.
no code implementations • 5 May 2021 • Samanta Knapič, Avleen Malhi, Rohit Saluja, Kary Främling
We conducted three user studies based on the explanations provided by LIME, SHAP and CIU.
no code implementations • 6 Apr 2021 • Rohit Saluja, Avleen Malhi, Samanta Knapič, Kary Främling, Cicek Cavdar
Machine learning-based systems are rapidly gaining popularity and, in line with this, there has been a surge of research in explainability to ensure that machine learning models are reliable, fair, and can be held accountable for their decision-making process.
no code implementations • 30 Mar 2021 • Matti Huotari, Shashank Arora, Avleen Malhi, Kary Främling
On the other hand, the BAG model results suggest that the developed supervised learning model, using decision trees as the base estimator, yields better forecast accuracy in the presence of large variation in the data for a single battery.
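The BAG approach referred to above, bagging with decision-tree base estimators, can be sketched minimally with bootstrap-sampled regression stumps (one-split trees). This toy class is a stand-in to illustrate the technique, not the authors' model.

```python
import numpy as np

class BaggedStumps:
    """Toy bagging ensemble: each base estimator is a regression stump
    (a decision tree with a single split) fit on a bootstrap sample."""

    def __init__(self, n_estimators=25, seed=0):
        self.n_estimators = n_estimators
        self.rng = np.random.default_rng(seed)
        self.stumps = []

    def _fit_stump(self, x, y):
        # Choose the split threshold minimising total squared error.
        best = (np.inf, x.min(), y.mean(), y.mean())
        for t in np.unique(x):
            left, right = y[x <= t], y[x > t]
            if len(left) == 0 or len(right) == 0:
                continue
            err = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
            if err < best[0]:
                best = (err, t, left.mean(), right.mean())
        return best[1:]  # (threshold, left prediction, right prediction)

    def fit(self, x, y):
        n = len(x)
        for _ in range(self.n_estimators):
            idx = self.rng.integers(0, n, n)       # bootstrap sample with replacement
            self.stumps.append(self._fit_stump(x[idx], y[idx]))
        return self

    def predict(self, x):
        # Average the stump predictions (the bagging step).
        preds = [np.where(x <= t, lo, hi) for t, lo, hi in self.stumps]
        return np.mean(preds, axis=0)
```

Averaging many trees fit on resampled data is what makes bagging robust to large variation in the training data, which is the property the abstract attributes to the BAG model.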
no code implementations • 25 Jan 2021 • Marcus Westberg, Kary Främling
When human cognition is modeled in Philosophy and Cognitive Science, there is a pervasive idea that humans employ mental representations in order to navigate the world and make predictions about outcomes of future actions.
1 code implementation • 17 Dec 2020 • Nazanin Fouladgar, Kary Främling
In this work, we report the practical and theoretical aspects of Explainable AI (XAI) identified in some fundamental literature.
no code implementations • 29 Sep 2020 • Kary Främling
Especially when the AI system has been trained using Machine Learning, it tends to contain too many parameters to be analysed and understood, which is why such systems are called `black-box' systems.
1 code implementation • 30 May 2020 • Sule Anjomshoae, Kary Främling, Amro Najjar
We show the utility of explanations in a car selection example and in Iris flower classification by presenting complete explanations (i.e. the causes of an individual prediction) and contrastive explanations (i.e. contrasting an instance against the instance of interest).
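A contrastive explanation of the kind described above can be sketched by measuring, feature by feature, how far swapping in the contrast instance's value moves the prediction. The function name and this simple swap-one-feature scheme are illustrative assumptions, not the authors' exact formulation.

```python
def contrastive_explanation(model, instance, contrast, feature_names):
    """Rank features by how strongly replacing each value in `instance`
    with the corresponding value from `contrast` changes the model
    output. `model` maps a feature list to a scalar; hypothetical helper."""
    base = model(list(instance))
    deltas = {}
    for i, name in enumerate(feature_names):
        x = list(instance)
        x[i] = contrast[i]                 # swap in the contrast value
        deltas[name] = model(x) - base     # signed change in the prediction
    # Largest absolute change first: the features that most explain why
    # the instance of interest differs from the contrasting instance.
    return sorted(deltas.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

For example, with `model = lambda x: 2 * x[0] + x[1]`, the first feature dominates the contrast because its coefficient is larger.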