This paper provides new theory to support the eXplainable AI (XAI) method Contextual Importance and Utility (CIU).
For this work, we have access to a unique data set of 45 lithium-ion battery packs that exhibits large variation.
We conducted three user studies based on the explanations provided by LIME, SHAP and CIU.
Machine learning-based systems are rapidly gaining popularity, and in line with this trend, research on explainability has surged to ensure that machine learning models are reliable, fair, and accountable for their decision-making processes.
On the other hand, the bagging (BAG) model results suggest that the developed supervised learning model, which uses decision trees as the base estimator, yields better forecast accuracy when a single battery's data exhibit large variation.
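To make the bagging scheme concrete, here is a minimal pure-Python sketch, assuming the BAG model follows the standard bootstrap-aggregating recipe: each base learner is fitted on a bootstrap resample of the training data and the ensemble averages their predictions. The one-feature regression "stump" below is a hypothetical stand-in for a full decision tree, and the noisy step-function data is illustrative, not the battery data set.

```python
import random
import statistics

def fit_stump(xs, ys):
    """Fit a depth-1 regression tree: pick the split threshold on a single
    feature that minimises the total squared error of the two leaf means."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        lm, rm = statistics.mean(left), statistics.mean(right)
        err = sum((y - lm) ** 2 for y in left) + sum((y - rm) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def fit_bagged_stumps(xs, ys, n_estimators=25, seed=0):
    """Bagging: train each stump on a bootstrap resample, average predictions."""
    rng = random.Random(seed)
    n = len(xs)
    stumps = []
    for _ in range(n_estimators):
        idx = [rng.randrange(n) for _ in range(n)]  # sample n points with replacement
        stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return lambda x: statistics.mean(s(x) for s in stumps)

# Noisy step function: individual stumps are high-variance; averaging smooths them.
xs = [i / 10 for i in range(50)]
ys = [(0.0 if x < 2.5 else 1.0) + 0.1 * ((i * 7919) % 13 - 6) / 6
      for i, x in enumerate(xs)]
model = fit_bagged_stumps(xs, ys)
print(round(model(1.0), 2), round(model(4.0), 2))
```

Averaging over bootstrap resamples is what gives bagging its robustness to large variation in the data: each resample sees a different slice of the noise, so the ensemble mean has lower variance than any single tree.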
When human cognition is modeled in philosophy and cognitive science, a pervasive idea is that humans employ mental representations to navigate the world and to predict the outcomes of future actions.
In this work, we report practical and theoretical aspects of Explainable AI (XAI) identified in foundational literature.
Especially if the AI system has been trained using machine learning, it tends to contain too many parameters to be analysed and understood, which has led to such systems being called `black-box' systems.
We show the utility of explanations in a car selection example and in Iris flower classification by presenting complete explanations (i.e., the causes of an individual prediction) and contrastive explanations (i.e., contrasting another instance against the instance of interest).
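The core CIU quantities can be sketched in a few lines. The snippet below assumes the standard CIU definitions: Contextual Importance (CI) is the fraction of the model's total output range that can be reached by varying one feature while the others stay fixed, and Contextual Utility (CU) is where the current output sits within that reachable range. The `car_score` function and its weights are a hypothetical car-selection model invented for illustration, not the paper's actual model.

```python
def car_score(price, fuel_econ, safety):
    """Toy black-box scorer: all inputs on a 0..1 scale, higher = better car.
    (Illustrative weights; a cheap, economical, safe car scores high.)"""
    return 0.5 * (1 - price) + 0.2 * fuel_econ + 0.3 * safety

def ciu(model, instance, feature, grid=101, absmin=0.0, absmax=1.0):
    """Vary one feature over [0, 1] with the others held fixed; read off CI/CU."""
    outs = []
    for k in range(grid):
        probe = dict(instance)
        probe[feature] = k / (grid - 1)
        outs.append(model(**probe))
    cmin, cmax = min(outs), max(outs)
    ci = (cmax - cmin) / (absmax - absmin)           # share of output range this feature controls
    cu = (model(**instance) - cmin) / (cmax - cmin)  # how favourable the current value is
    return ci, cu

# Instance of interest: an expensive but economical, moderately safe car.
car = {"price": 0.8, "fuel_econ": 0.9, "safety": 0.6}
for f in car:
    ci, cu = ciu(car_score, car, f)
    print(f, round(ci, 2), round(cu, 2))
```

A contrastive explanation then falls out naturally: computing CI/CU for a second car and comparing the per-feature CU values shows on which features the instance of interest wins or loses against the contrasting instance.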