A Practical Guide on Explainable AI Techniques Applied on Biomedical Use Case Applications

Recent years have been characterized by an upsurge of opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although they exhibit strong generalization and predictive performance, their inner workings do not allow detailed explanations of their behaviour to be obtained. As opaque machine learning models are increasingly employed to make important predictions in critical environments, the danger is producing and acting on decisions that are not justifiable or legitimate. There is therefore general agreement on the importance of endowing machine learning models with explainability. EXplainable Artificial Intelligence (XAI) techniques can serve to verify and certify model outputs and enrich them with desirable properties such as trustworthiness, accountability, transparency and fairness. This guide is meant to be a go-to handbook for any audience with a computer science background who wants intuitive insights into machine learning models, together with straightforward, fast explanations that work out of the box. It aims to fill the lack of a compelling XAI guide by showing how to apply XAI techniques to readers' own day-to-day models, datasets and use cases. Figure 1 acts as a flowchart and should help the reader find the method best suited to their type of data. Each chapter contains a description of the proposed method, an example of its use on a biomedical application, and a Python notebook that can easily be adapted to other applications.
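To give a flavour of the kind of out-of-the-box explanation workflow the guide describes, the minimal sketch below applies SHAP to an opaque classifier trained on a tabular biomedical dataset. The concrete choices here (scikit-learn's Wisconsin breast cancer data, a random forest, the shap package) are illustrative assumptions, not the paper's own notebooks.

# Minimal sketch of an out-of-the-box XAI workflow on tabular biomedical data.
# All concrete choices (dataset, model, the shap library) are assumptions for
# illustration; the paper's notebooks may differ.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Wisconsin breast cancer data: a small tabular biomedical dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model whose individual predictions we want to explain.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes SHAP values: per-feature, per-sample contributions
# to the model output, grounded in Shapley values from cooperative game theory.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Depending on the shap version, binary-classification output is either a list
# of two arrays or one 3-D array; select the positive class in either case.
sv_pos = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Global view: which features drive the predictions, and in which direction.
shap.summary_plot(sv_pos, X_test)

The same pattern (fit an opaque model, wrap it in an explainer, plot per-feature attributions) carries over to the other data types covered by the flowchart in Figure 1, with the explainer swapped for one suited to images or text.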
