1 code implementation • 15 Feb 2022 • Kary Främling
This paper provides new theory to support the eXplainable AI (XAI) method Contextual Importance and Utility (CIU).
no code implementations • 6 Jul 2021 • Matti Huotari, Shashank Arora, Avleen Malhi, Kary Främling
For this work, we are in possession of a unique data set of 45 lithium-ion battery packs with large variation in the data.
no code implementations • 5 May 2021 • Samanta Knapič, Avleen Malhi, Rohit Saluja, Kary Främling
We conducted three user studies based on the explanations provided by LIME, SHAP and CIU.
no code implementations • 6 Apr 2021 • Rohit Saluja, Avleen Malhi, Samanta Knapič, Kary Främling, Cicek Cavdar
Machine learning-based systems are rapidly gaining popularity, and in line with that there has been a surge of research in explainability to ensure that machine learning models are reliable, fair, and accountable for their decision-making.
no code implementations • 30 Mar 2021 • Matti Huotari, Shashank Arora, Avleen Malhi, Kary Främling
On the other hand, the BAG model results suggest that the developed supervised learning model, which uses decision trees as the base estimator, yields better forecast accuracy for a single battery in the presence of large variation in the data.
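A minimal sketch of the kind of bagging forecaster described above, with decision trees as the base estimator. This is not the authors' pipeline: the synthetic cycles-to-capacity data and all parameter choices here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy "battery" data (illustrative only): capacity decays roughly
# linearly with charge cycles, plus measurement noise.
X = rng.uniform(0, 500, size=(200, 1))            # charge cycles
y = 100.0 - 0.05 * X[:, 0] + rng.normal(0, 2.0, size=200)  # capacity (%)

# Bagging ensemble of shallow decision trees, as in the BAG model above.
bag = BaggingRegressor(
    DecisionTreeRegressor(max_depth=5),  # base estimator
    n_estimators=50,
    random_state=0,
).fit(X, y)

# Forecast remaining capacity at 250 cycles.
pred = bag.predict([[250.0]])
```

Averaging many trees fitted on bootstrap resamples is what gives bagging its robustness to the large sample-to-sample variation mentioned in the abstract.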
no code implementations • 25 Jan 2021 • Marcus Westberg, Kary Främling
When human cognition is modeled in Philosophy and Cognitive Science, there is a pervasive idea that humans employ mental representations in order to navigate the world and make predictions about outcomes of future actions.
1 code implementation • 17 Dec 2020 • Nazanin Fouladgar, Kary Främling
In this work, we report the practical and theoretical aspects of Explainable AI (XAI) identified in some fundamental literature.
no code implementations • 29 Sep 2020 • Kary Främling
Especially if the AI system has been trained using Machine Learning, it tends to contain too many parameters to be analysed and understood, which has caused such systems to be called 'black-box' systems.
1 code implementation • 30 May 2020 • Sule Anjomshoae, Kary Främling, Amro Najjar
We show the utility of explanations in a car selection example and in Iris flower classification by presenting complete explanations (i.e., the causes of an individual prediction) and contrastive explanations (i.e., contrasting an instance against the instance of interest).
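A from-scratch sketch of Contextual Importance (CI) and Contextual Utility (CU) for a single feature on the Iris task, following the standard CIU definitions (CI: how much the output can vary when the feature varies over its range; CU: where the current output sits within that variation). This is not the authors' implementation; the random-forest model, grid size, and output bounds are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

def ciu_one_feature(model, x, i, lo, hi, target, n=50,
                    out_min=0.0, out_max=1.0):
    """Vary feature i of instance x over [lo, hi]; return (CI, CU)
    for the predicted probability of class `target`."""
    grid = np.linspace(lo, hi, n)
    samples = np.repeat(x[None, :], n, axis=0)
    samples[:, i] = grid
    probs = model.predict_proba(samples)[:, target]
    cmin, cmax = probs.min(), probs.max()
    # CI: range of the output attainable via feature i, relative to
    # the full output range (here probabilities, so [0, 1]).
    ci = (cmax - cmin) / (out_max - out_min)
    # CU: position of the actual output within [cmin, cmax].
    out = model.predict_proba(x[None, :])[0, target]
    cu = (out - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu

x = X[0]  # instance of interest (a setosa sample)
ci, cu = ciu_one_feature(model, x, i=2,
                         lo=X[:, 2].min(), hi=X[:, 2].max(), target=y[0])
```

A contrastive explanation in the CIU sense can then be built by comparing these per-feature CI/CU values for the instance of interest against those of a contrasting instance.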