Explainable artificial intelligence

206 papers with code • 0 benchmarks • 8 datasets

XAI refers to methods and techniques in the application of artificial intelligence (AI) such that the results of the solution can be understood by humans. It contrasts with the "black box" concept in machine learning, where even a system's designers cannot explain why it arrived at a specific decision. XAI may be viewed as an implementation of the social right to explanation, but it is relevant even where no legal right or regulatory requirement exists: for example, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions. In this way, XAI aims to explain what has been done, what is being done now, and what will be done next, and to unveil the information these actions are based on. These characteristics make it possible (i) to confirm existing knowledge, (ii) to challenge existing knowledge, and (iii) to generate new hypotheses.
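
Unveiling the information a decision is based on can be made concrete with permutation importance, one of the simplest model-agnostic XAI techniques: shuffle one feature and measure how much the model's error grows. The toy model and data below are hypothetical and illustrative only, not taken from any listed repository.

```python
import random

# Hypothetical toy "black box": depends strongly on feature 0, weakly on feature 1.
def predict(row):
    return 3.0 * row[0] + 0.5 * row[1]

def mse(X, y):
    return sum((predict(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, seed=0):
    """Increase in MSE when one feature column is shuffled."""
    rng = random.Random(seed)
    col = [r[feature] for r in X]
    rng.shuffle(col)  # break the feature-target association
    X_perm = [list(r) for r in X]
    for r, v in zip(X_perm, col):
        r[feature] = v
    return mse(X_perm, y) - mse(X, y)

rng = random.Random(42)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
y = [predict(r) for r in X]  # labels generated by the model itself

imp0 = permutation_importance(X, y, 0)
imp1 = permutation_importance(X, y, 1)
print(imp0 > imp1)  # the heavily used feature shows the larger error increase
```

Features the model relies on heavily show a large error increase when shuffled; irrelevant features show almost none, which is exactly the kind of human-readable summary XAI targets.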

Multi-Excitation Projective Simulation with a Many-Body Physics Inspired Inductive Bias

mariuskrumm/manybodymeps 15 Feb 2024

To overcome this limitation, we introduce Multi-Excitation Projective Simulation (mePS), a generalization that considers a chain-of-thought to be a random walk of several particles on a hypergraph.


Detecting mental disorder on social media: a ChatGPT-augmented explainable approach

scalabunical/bert-xdd 30 Jan 2024

In the digital era, the prevalence of depressive symptoms expressed on social media has raised serious concerns, necessitating advanced methodologies for timely detection.


NormEnsembleXAI: Unveiling the Strengths and Weaknesses of XAI Ensemble Techniques

hryniewska/ensemblexai 30 Jan 2024

This paper presents a comprehensive comparative analysis of explainable artificial intelligence (XAI) ensembling methods.


Deep Learning for Gamma-Ray Bursts: A data driven event framework for X/Gamma-Ray analysis in space telescopes

rcrupi/deepgrb 28 Jan 2024

The first three chapters of this thesis provide an overview of Gamma-Ray Bursts (GRBs): their properties, the instrumentation used to detect them, and Artificial Intelligence (AI) applications in the context of GRBs, including a literature review and future prospects.


Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles

mmschlk/treeshap-iq 22 Jan 2024

While shallow decision trees may be interpretable, larger ensemble models like gradient-boosted trees, which often set the state of the art in machine learning problems involving tabular data, still remain black box models.
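
The "any-order Shapley interactions" in the title build on classical Shapley values, which for a handful of features can be computed exactly by averaging a feature's marginal contribution over all orderings. The value function below is a hypothetical toy (not TreeSHAP-IQ's actual API), chosen so the arithmetic can be checked by hand.

```python
from itertools import permutations

# Hypothetical coalition value function: v(S) = model output when only
# the features in S are "known". Numbers are illustrative only.
v = {
    frozenset(): 0.0,
    frozenset({0}): 4.0,
    frozenset({1}): 1.0,
    frozenset({0, 1}): 6.0,
}

def shapley(feature, players=(0, 1)):
    """Exact Shapley value: mean marginal contribution over all orderings."""
    orders = list(permutations(players))
    total = 0.0
    for order in orders:
        seen = set()
        for p in order:
            if p == feature:
                total += v[frozenset(seen | {p})] - v[frozenset(seen)]
                break
            seen.add(p)
    return total / len(orders)

print(shapley(0), shapley(1))  # → 4.5 1.5
```

Note the efficiency property: the two values sum to v({0, 1}) - v(∅) = 6. Exact enumeration costs n! orderings, which is why tree-structured shortcuts like TreeSHAP matter for real ensembles.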


Word-Level ASR Quality Estimation for Efficient Corpus Sampling and Post-Editing through Analyzing Attentions of a Reference-Free Metric

aixplain/NoRefER 20 Jan 2024

The findings suggest that NoRefER is not merely a tool for error detection but also a comprehensive framework for enhancing ASR systems' transparency, efficiency, and effectiveness.


MICA: Towards Explainable Skin Lesion Diagnosis via Multi-Level Image-Concept Alignment

tommy-bie/mica 16 Jan 2024

Black-box deep learning approaches have showcased significant potential in the realm of medical image analysis.


Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test

annahedstroem/sanity-checks-revisited 12 Jan 2024

The Model Parameter Randomisation Test (MPRT) is widely acknowledged in the eXplainable Artificial Intelligence (XAI) community for its well-motivated evaluative principle: that the explanation function should be sensitive to changes in the parameters of the model function.
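
The MPRT principle can be illustrated with a deliberately simple setup (all names and numbers below are assumptions, not the paper's code): for a linear model f(x) = w · x, the input gradient equals the weight vector, so randomising the parameters must change a faithful gradient-based explanation.

```python
import random

def explanation(weights, x):
    # Gradient of f(x) = sum(w_i * x_i) with respect to x is just the weights.
    return list(weights)

def cosine(a, b):
    dot = sum(p * q for p, q in zip(a, b))
    na = sum(p * p for p in a) ** 0.5
    nb = sum(q * q for q in b) ** 0.5
    return dot / (na * nb)

rng = random.Random(0)
w_trained = [2.0, -1.0, 0.5]   # hypothetical trained parameters
x = [1.0, 1.0, 1.0]

e_before = explanation(w_trained, x)
w_random = [rng.gauss(0, 1) for _ in w_trained]  # MPRT: randomise parameters
e_after = explanation(w_random, x)

# A faithful explanation method should NOT produce near-identical
# explanations for trained vs. randomised parameters.
print(cosine(e_before, e_after))
```

An explanation method that kept a similarity near 1 after randomisation would fail this sanity check; the cited work probes where the test itself can be misleading.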


Explainable artificial intelligence approaches for brain-computer interfaces: a review and design space

miilab-iitgn/xai4bci 20 Dec 2023

We propose a design space for XAI4BCI, considering the evolving need to visualize and investigate predictive model outcomes customised for various stakeholders in the BCI development and deployment lifecycle.


An Interpretable Deep Learning Approach for Skin Cancer Categorization

faysal-md/an-interpretable-deep-learning-approach-for-skin-cancer-categorization-ieee2023 17 Dec 2023

Our model's decision-making process can be clarified through the implementation of explainable artificial intelligence (XAI).
