Explainable artificial intelligence

203 papers with code • 0 benchmarks • 8 datasets

XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the concept of the "black box" in machine learning, where even a model's designers cannot explain why it arrived at a specific decision. XAI may be an implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists—for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. In this way, XAI aims to explain what has been done, what is being done now, and what will be done next, and to unveil the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new assumptions.
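One widely used model-agnostic XAI technique is permutation feature importance: shuffle one input feature and measure how much the model's error grows, revealing which features its decisions actually depend on. The sketch below is purely illustrative—the toy `model`, dataset, and function names are assumptions, not taken from any paper listed here:

```python
import random

# Hypothetical toy model: feature 0 dominates, feature 1 is nearly irrelevant.
# All names and coefficients are illustrative assumptions.
def model(x):
    return 3.0 * x[0] + 0.1 * x[1]

# Small synthetic dataset; targets come from the same rule, so base error is 0.
data = [(a / 10.0, b / 10.0) for a in range(10) for b in range(10)]
targets = [model(x) for x in data]

def mse(preds, truth):
    return sum((p - t) ** 2 for p, t in zip(preds, truth)) / len(truth)

def permutation_importance(feature, trials=20, seed=0):
    """Average increase in MSE when one feature's column is shuffled."""
    rng = random.Random(seed)
    base = mse([model(x) for x in data], targets)
    increases = []
    for _ in range(trials):
        column = [x[feature] for x in data]
        rng.shuffle(column)
        shuffled = [
            (v, x[1]) if feature == 0 else (x[0], v)
            for v, x in zip(column, data)
        ]
        increases.append(mse([model(x) for x in shuffled], targets) - base)
    return sum(increases) / trials

print(permutation_importance(0))  # large: the model leans heavily on x[0]
print(permutation_importance(1))  # near zero: x[1] barely matters
```

A score near zero tells a user the feature could be removed without changing the model's behavior; a large score flags a feature worth auditing—exactly the kind of confirmation or challenge of existing knowledge described above.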

Latest papers with no code

Concept Induction using LLMs: a user experiment for assessment

no code yet • 18 Apr 2024

To evaluate the output, we compare the concepts generated by the LLM with two other methods: concepts generated by humans and the ECII heuristic concept induction system.

Explainable Artificial Intelligence Techniques for Accurate Fault Detection and Diagnosis: A Review

no code yet • 17 Apr 2024

As the manufacturing industry advances with sensor integration and automation, the opaque nature of deep learning models in machine learning poses a significant challenge for fault detection and diagnosis.

Explainable Lung Disease Classification from Chest X-Ray Images Utilizing Deep Learning and XAI

no code yet • 17 Apr 2024

Lung diseases remain a critical global health concern, making accurate and rapid diagnosis crucial.

CNN-based explanation ensembling for dataset, representation and explanations evaluation

no code yet • 16 Apr 2024

Explainable Artificial Intelligence has gained significant attention due to the widespread use of complex deep learning models in high-stakes domains such as medicine, finance, and autonomous cars.

Reactive Model Correction: Mitigating Harm to Task-Relevant Features via Conditional Bias Suppression

no code yet • 15 Apr 2024

Deep Neural Networks are prone to learning and relying on spurious correlations in the training data, which, for high-risk applications, can have fatal consequences.

Beyond One-Size-Fits-All: Adapting Counterfactual Explanations to User Objectives

no code yet • 12 Apr 2024

Explainable Artificial Intelligence (XAI) has emerged as a critical area of research aimed at enhancing the transparency and interpretability of AI systems.

Unraveling the Dilemma of AI Errors: Exploring the Effectiveness of Human and Machine Explanations for Large Language Models

no code yet • 11 Apr 2024

The field of eXplainable artificial intelligence (XAI) has produced a plethora of methods (e.g., saliency maps) to gain insight into artificial intelligence (AI) models, and has exploded with the rise of deep learning (DL).

Concept-Attention Whitening for Interpretable Skin Lesion Diagnosis

no code yet • 9 Apr 2024

In the former branch, we train the CNN with a CAW layer inserted to perform skin lesion diagnosis.

Enhancing Breast Cancer Diagnosis in Mammography: Evaluation and Integration of Convolutional Neural Networks and Explainable AI

no code yet • 5 Apr 2024

The study introduces an integrated framework combining Convolutional Neural Networks (CNNs) and Explainable Artificial Intelligence (XAI) for the enhanced diagnosis of breast cancer using the CBIS-DDSM dataset.

Comprehensible Artificial Intelligence on Knowledge Graphs: A survey

no code yet • 4 Apr 2024

Thus, in this survey we provide a case for Comprehensible Artificial Intelligence on Knowledge Graphs, consisting of Interpretable Machine Learning on Knowledge Graphs and Explainable Artificial Intelligence on Knowledge Graphs.