Search Results for author: Andreas Holzinger

Found 25 papers, 8 papers with code

Be Careful When Evaluating Explanations Regarding Ground Truth

1 code implementation • 8 Nov 2023 • Hubert Baniecki, Maciej Chrabaszcz, Andreas Holzinger, Bastian Pfeifer, Anna Saranti, Przemyslaw Biecek

Evaluating explanations of image classifiers regarding ground truth, e.g. segmentation masks defined by human perception, primarily evaluates the quality of the models under consideration rather than the explanation methods themselves.

Explaining and visualizing black-box models through counterfactual paths

1 code implementation • 15 Jul 2023 • Bastian Pfeifer, Mateusz Krzyzinski, Hubert Baniecki, Anna Saranti, Andreas Holzinger, Przemyslaw Biecek

Explainable AI (XAI) is an increasingly important area of machine learning research, which aims to make black-box models transparent and interpretable.

counterfactual • Explainable Artificial Intelligence (XAI) • +2

Ethical ChatGPT: Concerns, Challenges, and Commandments

no code implementations • 18 May 2023 • Jianlong Zhou, Heimo Müller, Andreas Holzinger, Fang Chen

Large language models, e.g. ChatGPT, are currently contributing enormously to making artificial intelligence even more popular, especially among the general population.

Chatbot

Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization

1 code implementation • 20 May 2022 • Javier Del Ser, Alejandro Barredo-Arrieta, Natalia Díaz-Rodríguez, Francisco Herrera, Andreas Holzinger

To this end, we present a novel framework for the generation of counterfactual examples which formulates its goal as a multi-objective optimization problem balancing three different objectives: 1) plausibility, i.e., the likelihood of the counterfactual being possible under the distribution of the input data; 2) intensity of the changes to the original input; and 3) adversarial power, namely, the variability of the model's output induced by the counterfactual.

counterfactual • Generative Adversarial Network
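
As a rough illustration of the three objectives above, the sketch below scores one candidate counterfactual against a fitted classifier. The function name, the KDE-based plausibility proxy, and the probability-shift measure of adversarial power are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of the three objectives, scored for a candidate
# counterfactual x_cf of an original input x. The KDE plausibility
# proxy and the probability-shift measure are assumptions.
import numpy as np
from sklearn.neighbors import KernelDensity

def counterfactual_objectives(x, x_cf, model, kde: KernelDensity):
    """Return (plausibility, intensity, adversarial_power)."""
    # 1) plausibility: log-density of x_cf under a KDE fitted on training data
    plausibility = kde.score_samples(x_cf.reshape(1, -1))[0]
    # 2) intensity: size of the change applied to the original input
    intensity = np.linalg.norm(x_cf - x)
    # 3) adversarial power: shift induced in the model's predicted probabilities
    p_orig = model.predict_proba(x.reshape(1, -1))[0]
    p_cf = model.predict_proba(x_cf.reshape(1, -1))[0]
    adversarial_power = np.abs(p_cf - p_orig).sum()
    return plausibility, intensity, adversarial_power
```

A multi-objective optimizer such as NSGA-II would then search for candidates along the Pareto front spanned by these three scores.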

Graph-guided random forest for gene set selection

1 code implementation • 26 Aug 2021 • Bastian Pfeifer, Hubert Baniecki, Anna Saranti, Przemyslaw Biecek, Andreas Holzinger

To demonstrate a concrete application example, we focus on bioinformatics, systems biology and particularly biomedicine, but the presented methodology is applicable in many other domains as well.

KANDINSKYPatterns -- An experimental exploration environment for Pattern Analysis and Machine Intelligence

no code implementations • 28 Feb 2021 • Andreas Holzinger, Anna Saranti, Heimo Mueller

Machine intelligence is very successful at standard recognition tasks when high-quality training data are available.

Measuring the Quality of Explanations: The System Causability Scale (SCS). Comparing Human and Machine Explanations

no code implementations • 19 Dec 2019 • Andreas Holzinger, André Carrington, Heimo Müller

In order to build effective and efficient interactive human-AI interfaces, we have to deal with the question of how to evaluate the quality of explanations given by an explainable AI system.

Decision Making • Explainable Artificial Intelligence (XAI)
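
For context, the System Causability Scale is built from ten statements rated on a five-point Likert scale, with the final score normalised by the maximum attainable sum. The sketch below assumes that published ten-item design.

```python
# Minimal sketch of computing an SCS score, assuming the published
# design: ten Likert items rated 1-5, normalised by the maximum sum (50).
def scs_score(ratings):
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SCS expects ten ratings on a 1-5 Likert scale")
    return sum(ratings) / 50.0

print(scs_score([4, 5, 3, 4, 4, 5, 3, 4, 4, 5]))  # 0.82
```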

Performing Arithmetic Using a Neural Network Trained on Digit Permutation Pairs

no code implementations • 6 Dec 2019 • Marcus D. Bloice, Peter M. Roth, Andreas Holzinger

In this paper a neural network is trained to perform simple arithmetic using images of concatenated handwritten digit pairs.
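
A minimal sketch of the data construction this implies, assuming addition over concatenated MNIST digits; the paper's permutation-pair setup and target operation may differ.

```python
# Build training samples as side-by-side digit pairs labelled with their sum.
# The choice of MNIST and of addition as the target is an assumption here.
import numpy as np
from tensorflow.keras.datasets import mnist

(x, y), _ = mnist.load_data()
rng = np.random.default_rng(0)
i, j = rng.integers(0, len(x), 10000), rng.integers(0, len(x), 10000)

# Each sample is a 28x56 image of two digits; the label is their sum (0-18).
pairs = np.concatenate([x[i], x[j]], axis=2).astype("float32") / 255.0
labels = y[i] + y[j]
```

A small image classifier over the 19 possible sums can then be trained on (pairs, labels).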

Patch augmentation: Towards efficient decision boundaries for neural networks

1 code implementation • 8 Nov 2019 • Marcus D. Bloice, Peter M. Roth, Andreas Holzinger

In this paper we propose a new augmentation technique, called patch augmentation, that, in our experiments, improves model accuracy and makes networks more robust to adversarial attacks.

Adversarial Attack
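
The snippet below is a hedged sketch of a patch-based augmentation in this spirit, written as a CutMix-style patch swap with area-proportional label mixing; the paper's exact procedure may differ.

```python
# Paste a random rectangular patch from img_b into img_a and mix the
# (one-hot) labels in proportion to the patch area. This CutMix-style
# formulation is an assumption, not necessarily the paper's method.
import numpy as np

def patch_augment(img_a, img_b, label_a, label_b, patch_frac=0.3, rng=None):
    rng = rng or np.random.default_rng()
    h, w = img_a.shape[:2]
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    top, left = rng.integers(0, h - ph + 1), rng.integers(0, w - pw + 1)
    out = img_a.copy()
    out[top:top + ph, left:left + pw] = img_b[top:top + ph, left:left + pw]
    lam = 1.0 - (ph * pw) / (h * w)  # fraction of img_a that remains
    return out, lam * label_a + (1.0 - lam) * label_b
```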

Kandinsky Patterns

1 code implementation • 3 Jun 2019 • Heimo Mueller, Andreas Holzinger

Kandinsky Figures and Kandinsky Patterns are mathematically describable, simple, self-contained, and hence controllable test data sets for the development, validation and training of explainability in artificial intelligence.
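
To illustrate what "mathematically describable" and "controllable" mean here, a Kandinsky Figure can be represented as a set of objects with symbolic attributes, and a pattern as a logical rule over those attributes. The sketch below is illustrative; the attribute vocabulary is assumed.

```python
# Represent a Kandinsky Figure as a list of objects with symbolic
# attributes, and a pattern as a boolean rule over those attributes.
import random

SHAPES = ["circle", "square", "triangle"]
COLOURS = ["red", "yellow", "blue"]

def random_figure(n_objects=4, seed=None):
    rng = random.Random(seed)
    return [{"shape": rng.choice(SHAPES), "colour": rng.choice(COLOURS),
             "x": rng.random(), "y": rng.random(),
             "size": rng.uniform(0.05, 0.2)} for _ in range(n_objects)]

def pattern_all_same_colour(figure):
    # Example ground-truth rule: the figure belongs to the pattern
    # iff all of its objects share one colour.
    return len({obj["colour"] for obj in figure}) == 1
```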

Human Activity Recognition using Recurrent Neural Networks

no code implementations • 19 Apr 2018 • Deepika Singh, Erinc Merdivan, Ismini Psychoula, Johannes Kropf, Sten Hanke, Matthieu Geist, Andreas Holzinger

Human activity recognition using smart home sensors is one of the bases of ubiquitous computing in smart environments and a topic undergoing intense research in the field of ambient assisted living.

BIG-bench Machine Learning • Human Activity Recognition

A Deep Learning Approach for Privacy Preservation in Assisted Living

no code implementations • 22 Feb 2018 • Ismini Psychoula, Erinc Merdivan, Deepika Singh, Liming Chen, Feng Chen, Sten Hanke, Johannes Kropf, Andreas Holzinger, Matthieu Geist

In the era of Internet of Things (IoT) technologies, the potential for privacy invasion is becoming a major concern, especially with regard to healthcare data and Ambient Assisted Living (AAL) environments.

The Need for Speed of AI Applications: Performance Comparison of Native vs. Browser-based Algorithm Implementations

no code implementations • 11 Feb 2018 • Bernd Malle, Nicola Giuliani, Peter Kieseberg, Andreas Holzinger

AI applications pose increasing demands on performance, so it is not surprising that client-side distributed software is becoming increasingly important.

Computational Efficiency

What do we need to build explainable AI systems for the medical domain?

no code implementations • 28 Dec 2017 • Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis, Douglas B. Kell

In this paper we outline some of our research topics in the context of the relatively new area of explainable AI, with a focus on applications in medicine, which is a very special domain.

Autonomous Driving • Game of Go • +3

Towards the Augmented Pathologist: Challenges of Explainable-AI in Digital Pathology

no code implementations • 18 Dec 2017 • Andreas Holzinger, Bernd Malle, Peter Kieseberg, Peter M. Roth, Heimo Müller, Robert Reihs, Kurt Zatloukal

The foundation of such an "augmented pathologist" needs an integrated approach: While machine learning algorithms require many thousands of training examples, a human expert is often confronted with only a few data points.

BIG-bench Machine Learning

Augmentor: An Image Augmentation Library for Machine Learning

6 code implementations • 11 Aug 2017 • Marcus D. Bloice, Christof Stocker, Andreas Holzinger

The generation of artificial data based on existing observations, known as data augmentation, is a technique used in machine learning to improve model accuracy and generalisation, and to control overfitting.

BIG-bench Machine Learning • Image Augmentation
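
Typical usage follows the library's documented pipeline API: operations are chained onto a pipeline with a per-operation execution probability, then samples are generated. The image path below is a placeholder.

```python
# Build an Augmentor pipeline over a directory of images and sample from it.
import Augmentor

p = Augmentor.Pipeline("/path/to/images")  # placeholder source directory
p.rotate(probability=0.7, max_left_rotation=10, max_right_rotation=10)
p.zoom(probability=0.5, min_factor=1.1, max_factor=1.5)
p.flip_left_right(probability=0.5)
p.sample(100)  # write 100 augmented images to an output directory
```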
