Search Results for author: Kary Främling

Found 12 papers, 3 papers with code

Feature Importance versus Feature Influence and What It Signifies for Explainable AI

no code implementations • 7 Aug 2023 • Kary Främling

The Contextual Importance and Utility (CIU) method provides a unified definition of global and local feature importance that is also applicable to post-hoc explanations, where the value utility concept provides an instance-level assessment of how favourable a feature value is for the outcome.

Feature Importance
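The CIU quantities described above can be sketched in a few lines. This is an illustrative approximation under stated assumptions, not the paper's reference implementation: contextual importance (CI) is estimated as the range of model outputs observed while one feature sweeps its value range, normalised by the output range (assumed here to be [0, 1]), and contextual utility (CU) as where the current output falls within that contextual range. The function name and sampling scheme are hypothetical.

```python
import numpy as np

def contextual_importance_utility(model, x, feature, feature_range, n_samples=100):
    """Estimate Contextual Importance (CI) and Contextual Utility (CU) for one
    feature of instance x, following the standard CIU definitions:
        CI = (Cmax - Cmin) / (absmax - absmin)
        CU = (y - Cmin) / (Cmax - Cmin)
    `model` is any callable mapping a 1-D feature vector to a scalar output.
    Assumption: the model output lies in [0, 1], so absmin=0 and absmax=1.
    """
    y = model(x)
    # Vary the chosen feature over its range while holding the others fixed.
    values = np.linspace(feature_range[0], feature_range[1], n_samples)
    outputs = []
    for v in values:
        x_mod = np.array(x, dtype=float)
        x_mod[feature] = v
        outputs.append(model(x_mod))
    cmin, cmax = min(outputs), max(outputs)
    absmin, absmax = 0.0, 1.0  # assumed output bounds
    ci = (cmax - cmin) / (absmax - absmin)
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu
```

For a linear model the sweep recovers the intuitive answer: a feature with a large coefficient spans a wide contextual output range (high CI), and CU reports whether the instance's current value sits near the favourable or unfavourable end of that range.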

Context, Utility and Influence of an Explanation

no code implementations • 22 Mar 2023 • Minal Suresh Patil, Kary Främling

AI developers can create ethical systems that benefit society by considering contextual factors like societal norms and values.

Decision Making • Ethics

Do intermediate feature coalitions aid explainability of black-box models?

no code implementations • 21 Mar 2023 • Minal Suresh Patil, Kary Främling

This work introduces the notion of intermediate concepts, based on a levels structure, to aid the explainability of black-box models.

Contextual Importance and Utility: a Theoretical Foundation

1 code implementation • 15 Feb 2022 • Kary Främling

This paper provides new theory to support the eXplainable AI (XAI) method Contextual Importance and Utility (CIU).

Attribute • Explainable Artificial Intelligence (XAI)

Towards a Rigorous Evaluation of Explainability for Multivariate Time Series

no code implementations • 6 Apr 2021 • Rohit Saluja, Avleen Malhi, Samanta Knapič, Kary Främling, Cicek Cavdar

Machine learning-based systems are rapidly gaining popularity, and in line with that there has been a surge of research on explainability to ensure that machine learning models are reliable, fair, and can be held accountable for their decision-making process.

BIG-bench Machine Learning • Decision Making • +4

A Dynamic Battery State-of-Health Forecasting Model for Electric Trucks: Li-Ion Batteries Case-Study

no code implementations • 30 Mar 2021 • Matti Huotari, Shashank Arora, Avleen Malhi, Kary Främling

The BAG model results, on the other hand, suggest that the developed supervised learning model, using decision trees as the base estimator, yields better forecast accuracy when the data for an individual battery show large variation.

Cognitive Perspectives on Context-based Decisions and Explanations

no code implementations • 25 Jan 2021 • Marcus Westberg, Kary Främling

When human cognition is modeled in Philosophy and Cognitive Science, there is a pervasive idea that humans employ mental representations in order to navigate the world and make predictions about outcomes of future actions.

Decision Making • Explainable Artificial Intelligence (XAI) • +2

Explainable AI without Interpretable Model

no code implementations • 29 Sep 2020 • Kary Främling

Especially when an AI system has been trained using machine learning, it tends to contain too many parameters to be analysed and understood, which is why such systems are called 'black-box' systems.

Explainable Artificial Intelligence (XAI) • Feature Importance

Explanations of Black-Box Model Predictions by Contextual Importance and Utility

1 code implementation • 30 May 2020 • Sule Anjomshoae, Kary Främling, Amro Najjar

We show the utility of explanations in a car selection example and Iris flower classification by presenting complete explanations (i.e., the causes of an individual prediction) and contrastive explanations (i.e., contrasting an instance against the instance of interest).

Explainable artificial intelligence
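The contrastive explanations mentioned in the abstract above can be illustrated with a minimal sketch. This is not the paper's CIU procedure; it is a hypothetical one-feature-at-a-time contrast, where each feature of the instance of interest is swapped for the contrast instance's value and the resulting change in model output is reported. The function name and report format are assumptions.

```python
import numpy as np

def contrastive_explanation(model, x, x_ref, feature_names):
    """Toy contrastive explanation: for each feature, report how the model
    output changes when the instance of interest x takes the contrast
    instance x_ref's value for that feature, all other features held fixed.
    Illustrative sketch only, not the exact CIU method."""
    y = model(x)
    report = []
    for j, name in enumerate(feature_names):
        x_swap = np.array(x, dtype=float)
        x_swap[j] = x_ref[j]  # substitute the contrast instance's value
        delta = model(x_swap) - y
        report.append(f"{name}: {x_swap[j]} instead changes output by {delta:+.3f}")
    return report
```

For an Iris-style classifier, `model` would be the predicted probability of the class of interest, and a line with a large positive delta indicates a feature whose contrast value would push the prediction toward the other instance's class.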
