Search Results for author: Kary Främling

Found 9 papers, 3 papers with code

Contextual Importance and Utility: a Theoretical Foundation

1 code implementation • 15 Feb 2022 • Kary Främling

This paper provides new theory to support the eXplainable AI (XAI) method Contextual Importance and Utility (CIU).
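CIU estimates a Contextual Importance (CI) and a Contextual Utility (CU) for each feature by perturbing it over its value range and observing the model's output. Below is a minimal Python sketch of that estimation, assuming a generic `predict` callable, a single perturbed feature, and a unit output range; the function name and sampling grid are illustrative, not the paper's reference implementation.

```python
import numpy as np

def ciu(predict, x, j, feature_range, out_range=(0.0, 1.0), n_samples=100):
    """Estimate CI and CU of feature j for instance x by sweeping
    feature j over its range while keeping the other features fixed.
    `predict` maps a 2-D array of instances to a 1-D array of outputs."""
    lo, hi = feature_range
    samples = np.tile(x, (n_samples, 1))
    samples[:, j] = np.linspace(lo, hi, n_samples)  # perturb feature j only
    ys = predict(samples)
    ymin, ymax = ys.min(), ys.max()
    y = predict(x.reshape(1, -1))[0]
    # CI: how much of the possible output range feature j can cover here.
    ci = (ymax - ymin) / (out_range[1] - out_range[0])
    # CU: where the current output lies within that covered range.
    cu = (y - ymin) / (ymax - ymin) if ymax > ymin else 0.5
    return ci, cu
```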

Towards a Rigorous Evaluation of Explainability for Multivariate Time Series

no code implementations • 6 Apr 2021 • Rohit Saluja, Avleen Malhi, Samanta Knapič, Kary Främling, Cicek Cavdar

Machine learning-based systems are rapidly gaining popularity, and in line with this there has been a surge of research on explainability, aimed at ensuring that machine learning models are reliable, fair, and accountable for their decision-making process.

Decision Making · Explainable artificial intelligence +2

A Dynamic Battery State-of-Health Forecasting Model for Electric Trucks: Li-Ion Batteries Case-Study

no code implementations • 30 Mar 2021 • Matti Huotari, Shashank Arora, Avleen Malhi, Kary Främling

On the other hand, the bagging (BAG) model results suggest that the developed supervised learning model, using decision trees as the base estimator, yields better forecast accuracy in the presence of large variation in the data for one battery.
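As a rough illustration of such a BAG model, here is a minimal scikit-learn sketch of bagging with decision trees as the base estimator; the synthetic state-of-health data, the feature choice, and the hyperparameters are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

# Hypothetical stand-in features: cycle count, mean current, temperature.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = 1.0 - 0.3 * X[:, 0] + 0.02 * rng.normal(size=500)  # synthetic SoH decay

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging ensemble with decision trees as the base estimator.
bag = BaggingRegressor(DecisionTreeRegressor(max_depth=5),
                       n_estimators=100, random_state=0)
bag.fit(X_train, y_train)
print("Held-out R^2:", bag.score(X_test, y_test))
```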

Cognitive Perspectives on Context-based Decisions and Explanations

no code implementations • 25 Jan 2021 • Marcus Westberg, Kary Främling

When human cognition is modeled in Philosophy and Cognitive Science, there is a pervasive idea that humans employ mental representations in order to navigate the world and make predictions about outcomes of future actions.

Decision Making

XAI-P-T: A Brief Review of Explainable Artificial Intelligence from Practice to Theory

1 code implementation • 17 Dec 2020 • Nazanin Fouladgar, Kary Främling

In this work, we report the practical and theoretical aspects of Explainable AI (XAI) identified in a selection of the fundamental literature.

Explainable artificial intelligence

Explainable AI without Interpretable Model

no code implementations • 29 Sep 2020 • Kary Främling

Especially when an AI system has been trained using Machine Learning, it tends to contain too many parameters for humans to analyse and understand, which is why such systems have come to be called 'black-box' systems.

Feature Importance

Explanations of Black-Box Model Predictions by Contextual Importance and Utility

1 code implementation • 30 May 2020 • Sule Anjomshoae, Kary Främling, Amro Najjar

We show the utility of explanations in a car selection example and in Iris flower classification by presenting complete explanations (i.e., the causes of an individual prediction) and contrastive explanations (i.e., contrasting an instance against the instance of interest).

Explainable artificial intelligence
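To make the contrastive idea concrete, here is a sketch that compares CI/CU values for the instance of interest against a contrasting instance on Iris, reusing the hypothetical `ciu` helper sketched earlier; the classifier, the explained class, and the choice of contrasting instance are assumptions for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

iris = load_iris()
clf = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

# Explain the predicted probability of class 0 (setosa).
predict = lambda X: clf.predict_proba(X)[:, 0]

x = iris.data[0]             # instance of interest (a setosa sample)
x_contrast = iris.data[100]  # contrasting instance (a virginica sample)

for j, name in enumerate(iris.feature_names):
    fr = (iris.data[:, j].min(), iris.data[:, j].max())
    ci_a, cu_a = ciu(predict, x, j, fr)           # helper sketched above
    ci_b, cu_b = ciu(predict, x_contrast, j, fr)
    print(f"{name}: CI {ci_a:.2f}/{ci_b:.2f}, CU {cu_a:.2f}/{cu_b:.2f}")
```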
