1 code implementation • 8 Nov 2023 • Hubert Baniecki, Maciej Chrabaszcz, Andreas Holzinger, Bastian Pfeifer, Anna Saranti, Przemyslaw Biecek
Evaluating explanations of image classifiers regarding ground truth, e.g. segmentation masks defined by human perception, primarily evaluates the quality of the models under consideration rather than the explanation methods themselves.
no code implementations • 30 Oct 2023 • Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith, Simone Stumpf
As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount.
Explainable Artificial Intelligence (XAI)
1 code implementation • 15 Jul 2023 • Bastian Pfeifer, Mateusz Krzyzinski, Hubert Baniecki, Anna Saranti, Andreas Holzinger, Przemyslaw Biecek
Explainable AI (XAI) is an increasingly important area of machine learning research, which aims to make black-box models transparent and interpretable.
no code implementations • 18 May 2023 • Jianlong Zhou, Heimo Müller, Andreas Holzinger, Fang Chen
Large language models, e.g. ChatGPT, are currently contributing enormously to making artificial intelligence even more popular, especially among the general population.
1 code implementation • 20 May 2022 • Javier Del Ser, Alejandro Barredo-Arrieta, Natalia Díaz-Rodríguez, Francisco Herrera, Andreas Holzinger
To this end, we present a novel framework for the generation of counterfactual examples, which formulates its goal as a multi-objective optimization problem balancing three different objectives: 1) plausibility, i.e., the likelihood that the counterfactual is possible as per the distribution of the input data; 2) intensity of the changes to the original input; and 3) adversarial power, namely, the variability of the model's output induced by the counterfactual.
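The three objectives above can be sketched as a toy scalarised score. The function names, the weighted-sum aggregation, and the distance-based stand-in for plausibility are our assumptions for illustration, not the authors' implementation (which treats this as a genuine multi-objective search):

```python
# Hypothetical sketch of the three counterfactual objectives; all names,
# weights, and the plausibility proxy are assumptions, not the paper's code.

def plausibility(counterfactual, data_mean, data_std):
    """Toy plausibility: penalise standardised distance from the data
    distribution's mean (a crude stand-in for a density estimate)."""
    return -sum(((c - m) / s) ** 2
                for c, m, s in zip(counterfactual, data_mean, data_std))

def change_intensity(counterfactual, original):
    """Magnitude of the change to the original input (L2 distance)."""
    return sum((c - o) ** 2 for c, o in zip(counterfactual, original)) ** 0.5

def adversarial_power(model, counterfactual, original):
    """Variability of the model's output induced by the counterfactual."""
    return abs(model(counterfactual) - model(original))

def scalarised_score(model, counterfactual, original, data_mean, data_std,
                     weights=(1.0, 1.0, 1.0)):
    """Weighted-sum scalarisation of the multi-objective problem:
    maximise plausibility and adversarial power, minimise change intensity."""
    w1, w2, w3 = weights
    return (w1 * plausibility(counterfactual, data_mean, data_std)
            - w2 * change_intensity(counterfactual, original)
            + w3 * adversarial_power(model, counterfactual, original))
```

A fixed weighted sum collapses the trade-off to a single point; a real multi-objective formulation would instead explore the Pareto front, e.g. with an evolutionary algorithm.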
no code implementations • 13 Nov 2021 • Adrien Bennetot, Ivan Donadello, Ayoub El Qadi, Mauro Dragoni, Thomas Frossard, Benedikt Wagner, Anna Saranti, Silvia Tulli, Maria Trocan, Raja Chatila, Andreas Holzinger, Artur d'Avila Garcez, Natalia Díaz-Rodríguez
Recent years have been characterized by an upsurge of opaque automatic decision support systems, such as Deep Neural Networks (DNNs).
BIG-bench Machine Learning, Explainable Artificial Intelligence (+2)
1 code implementation • 26 Aug 2021 • Bastian Pfeifer, Hubert Baniecki, Anna Saranti, Przemyslaw Biecek, Andreas Holzinger
To demonstrate a concrete application example, we focus on bioinformatics, systems biology and particularly biomedicine, but the presented methodology is applicable in many other domains as well.
1 code implementation • 12 May 2021 • Julian Matschinske, Julian Späth, Reza Nasirigerdeh, Reihaneh Torkzadehmahani, Anne Hartebrodt, Balázs Orbán, Sándor Fejér, Olga Zolotareva, Mohammad Bakhtiari, Béla Bihari, Marcus Bloice, Nina C Donner, Walid Fdhila, Tobias Frisch, Anne-Christin Hauschild, Dominik Heider, Andreas Holzinger, Walter Hötzendorfer, Jan Hospes, Tim Kacprowski, Markus Kastelitz, Markus List, Rudolf Mayer, Mónika Moga, Heimo Müller, Anastasia Pustozerova, Richard Röttger, Anna Saranti, Harald HHW Schmidt, Christof Tschohl, Nina K Wenke, Jan Baumbach
Machine Learning (ML) and Artificial Intelligence (AI) have shown promising results in many areas and are driven by the increasing amount of available data.
no code implementations • 21 Mar 2021 • André M. Carrington, Douglas G. Manuel, Paul W. Fieguth, Tim Ramsay, Venet Osmani, Bernhard Wernly, Carol Bennett, Steven Hawken, Matthew McInnes, Olivia Magwood, Yusuf Sheikh, Andreas Holzinger
We demonstrate deep ROC analysis in two case studies and provide a toolkit in Python.
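As a rough illustration of analysing the ROC curve by groups rather than as one overall summary number, the area under a restricted portion of the curve can be computed with clipped trapezoids. This is a sketch in the spirit of deep ROC analysis; the function name and interface are our assumptions, not the authors' toolkit:

```python
# Partial AUC over an FPR interval [lo, hi] via clipped trapezoids.
# Sketch only; not the deep ROC analysis toolkit's API.

def partial_auc(fpr, tpr, lo, hi):
    """Trapezoidal AUC restricted to the FPR interval [lo, hi].
    fpr and tpr must be sorted in increasing FPR order."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(zip(fpr, tpr), zip(fpr[1:], tpr[1:])):
        a, b = max(x0, lo), min(x1, hi)  # clip the segment to [lo, hi]
        if a >= b or x1 == x0:
            continue
        # linearly interpolate TPR at the clipped endpoints
        ya = y0 + (y1 - y0) * (a - x0) / (x1 - x0)
        yb = y0 + (y1 - y0) * (b - x0) / (x1 - x0)
        area += (b - a) * (ya + yb) / 2.0
    return area
```

By construction the group areas sum to the total AUC, so performance can be compared across FPR ranges (e.g. the high-specificity region) instead of averaged away.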
no code implementations • 28 Feb 2021 • Andreas Holzinger, Anna Saranti, Heimo Mueller
Machine intelligence is very successful at standard recognition tasks when high-quality training data are available.
no code implementations • 25 Nov 2020 • Ellery Wulczyn, Kunal Nagpal, Matthew Symonds, Melissa Moran, Markus Plass, Robert Reihs, Farah Nader, Fraser Tan, Yuannan Cai, Trissia Brown, Isabelle Flament-Auvigne, Mahul B. Amin, Martin C. Stumpe, Heimo Muller, Peter Regitnig, Andreas Holzinger, Greg S. Corrado, Lily H. Peng, Po-Hsuan Cameron Chen, David F. Steiner, Kurt Zatloukal, Yun Liu, Craig H. Mermel
The model's C-indices were 0.87 and 0.85 for continuous and discrete grading, respectively, compared to 0.79 (95% CI 0.71-0.86) for GG obtained from the reports.
no code implementations • 22 Jul 2020 • Reihaneh Torkzadehmahani, Reza Nasirigerdeh, David B. Blumenthal, Tim Kacprowski, Markus List, Julian Matschinske, Julian Späth, Nina Kerstin Wenke, Béla Bihari, Tobias Frisch, Anne Hartebrodt, Anne-Christin Hauschild, Dominik Heider, Andreas Holzinger, Walter Hötzendorfer, Markus Kastelitz, Rudolf Mayer, Cristian Nogales, Anastasia Pustozerova, Richard Röttger, Harald H. H. W. Schmidt, Ameli Schwalber, Christof Tschohl, Andrea Wohner, Jan Baumbach
Artificial intelligence (AI) has been successfully applied in numerous scientific domains.
no code implementations • 19 Dec 2019 • Andreas Holzinger, André Carrington, Heimo Müller
In order to build effective and efficient interactive human-AI interfaces we have to deal with the question of how to evaluate the quality of explanations given by an explainable AI system.
no code implementations • 6 Dec 2019 • Marcus D. Bloice, Peter M. Roth, Andreas Holzinger
In this paper a neural network is trained to perform simple arithmetic using images of concatenated handwritten digit pairs.
1 code implementation • 8 Nov 2019 • Marcus D. Bloice, Peter M. Roth, Andreas Holzinger
In this paper we propose a new augmentation technique, called patch augmentation, that, in our experiments, improves model accuracy and makes networks more robust to adversarial attacks.
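A minimal sketch of the idea as described: paste a patch from one training image into another and mix the labels in proportion to the patch area. The parameter names and the exact mixing rule are our assumptions for illustration, not the paper's implementation:

```python
import random

# Toy patch augmentation: copy a random patch from a source image into a
# target image; labels are mixed by the fraction of pixels replaced.
# Interface and mixing rule are illustrative assumptions.

def patch_augment(target, source, y_target, y_source, patch_frac=0.5, rng=None):
    """target/source: 2-D lists of equal shape; y_*: one-hot label lists."""
    rng = rng or random.Random(0)
    h, w = len(target), len(target[0])
    ph, pw = max(1, int(h * patch_frac)), max(1, int(w * patch_frac))
    top = rng.randrange(h - ph + 1)
    left = rng.randrange(w - pw + 1)
    mixed = [row[:] for row in target]          # copy, keep target intact
    for i in range(top, top + ph):
        for j in range(left, left + pw):
            mixed[i][j] = source[i][j]          # paste the source patch
    lam = (ph * pw) / (h * w)                   # fraction taken from source
    y_mixed = [(1 - lam) * yt + lam * ys for yt, ys in zip(y_target, y_source)]
    return mixed, y_mixed
```

Soft labels of this kind force the network to attend to both image regions, which is one plausible reason such mixing-based augmentation improves robustness.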
1 code implementation • 3 Jun 2019 • Heimo Mueller, Andreas Holzinger
Kandinsky Figures and Kandinsky Patterns are mathematically describable, simple, self-contained, and hence controllable test data sets for the development, validation, and training of explainability in artificial intelligence.
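A hypothetical generator in this spirit, assuming a figure is a set of simple objects with shape, colour, size, and position, and a pattern is a boolean rule over figures; all names and attributes here are illustrative, not the authors' definitions:

```python
import random

# Illustrative Kandinsky-style generator: figures are sets of objects,
# and a "pattern" is a boolean rule an explainer should recover.
SHAPES = ["circle", "square", "triangle"]
COLOURS = ["red", "yellow", "blue"]

def random_figure(n_objects, rng):
    """Sample one figure: a list of objects with random attributes."""
    return [{"shape": rng.choice(SHAPES),
             "colour": rng.choice(COLOURS),
             "size": rng.uniform(0.05, 0.25),
             "x": rng.random(), "y": rng.random()}
            for _ in range(n_objects)]

def all_same_colour(figure):
    """Example ground-truth pattern: all objects share one colour."""
    return len({obj["colour"] for obj in figure}) == 1

rng = random.Random(42)
figures = [random_figure(4, rng) for _ in range(10)]
labels = [all_same_colour(f) for f in figures]
```

Because the labelling rule is known exactly, an explanation method can be checked against the true concept rather than against the quirks of a trained model.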
no code implementations • 19 Apr 2018 • Deepika Singh, Erinc Merdivan, Ismini Psychoula, Johannes Kropf, Sten Hanke, Matthieu Geist, Andreas Holzinger
Human activity recognition using smart home sensors is one of the bases of ubiquitous computing in smart environments and a topic undergoing intense research in the field of ambient assisted living.
no code implementations • 22 Feb 2018 • Ismini Psychoula, Erinc Merdivan, Deepika Singh, Liming Chen, Feng Chen, Sten Hanke, Johannes Kropf, Andreas Holzinger, Matthieu Geist
In the era of Internet of Things (IoT) technologies, the potential for privacy invasion is becoming a major concern, especially with regard to healthcare data and Ambient Assisted Living (AAL) environments.
no code implementations • 11 Feb 2018 • Bernd Malle, Nicola Giuliani, Peter Kieseberg, Andreas Holzinger
AI applications pose increasing demands on performance, so it is not surprising that client-side distributed software is becoming increasingly important.
no code implementations • 28 Dec 2017 • Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis, Douglas B. Kell
In this paper we outline some of our research topics in the context of the relatively new area of explainable AI, with a focus on applications in medicine, which is a very special domain.
no code implementations • 18 Dec 2017 • Andreas Holzinger, Bernd Malle, Peter Kieseberg, Peter M. Roth, Heimo Müller, Robert Reihs, Kurt Zatloukal
The foundation of such an "augmented pathologist" needs an integrated approach: While machine learning algorithms require many thousands of training examples, a human expert is often confronted with only a few data points.
no code implementations • RANLP 2017 • Seid Muhie Yimam, Steffen Remus, Alexander Panchenko, Andreas Holzinger, Chris Biemann
In this paper, we describe the concept of entity-centric information access for the biomedical domain.
6 code implementations • 11 Aug 2017 • Marcus D. Bloice, Christof Stocker, Andreas Holzinger
The generation of artificial data based on existing observations, known as data augmentation, is a technique used in machine learning to improve model accuracy and generalisation, and to control overfitting.
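As a toy, label-preserving example of the idea (a sketch, not the library's API): a horizontal flip leaves the class label unchanged, so each training example yields an additional one:

```python
# Minimal label-preserving augmentation sketch; a real pipeline would
# chain many probabilistic operations (rotations, zooms, distortions).

def horizontal_flip(image):
    """Horizontally flip a 2-D image given as a list of rows."""
    return [row[::-1] for row in image]

def augment(dataset):
    """Double a dataset of (image, label) pairs with flipped copies;
    the label is unchanged because flipping preserves the class."""
    return dataset + [(horizontal_flip(img), label) for img, label in dataset]
```

The key property is that each transformation must keep the example within its class; which transformations are safe (e.g. flips for natural photos but not for digits like "6" vs "9") is domain-dependent.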
no code implementations • 3 Aug 2017 • Andreas Holzinger, Markus Plass, Katharina Holzinger, Gloria Cerasela Crisan, Camelia-M. Pintea, Vasile Palade
The goal of Machine Learning is to automatically learn from data, extract knowledge, and make decisions without any human intervention.
no code implementations • 28 Jul 2017 • Irina Kuznetsova, Yuliya V Karpievitch, Aleksandra Filipovska, Artur Lugmayr, Andreas Holzinger
In biological research machine learning algorithms are part of nearly every analytical process.