1 code implementation • ICML 2020 • Fabian Hinder, André Artelt, Barbara Hammer
The notion of concept drift refers to the phenomenon that the distribution underlying the observed data changes over time; as a consequence, machine learning models may become inaccurate and need adjustment.
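As a toy illustration of what detecting such a distribution change can look like (a minimal sketch, not the method of this paper), the following compares a reference window against the most recent window of a univariate stream using a two-sample Kolmogorov-Smirnov test; the window size and significance level are illustrative choices:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(stream, window=200, alpha=0.01):
    """Flag positions where a reference window and the most recent
    window appear to come from different distributions (KS test)."""
    drift_points = []
    reference = stream[:window]
    for t in range(2 * window, len(stream) + 1, window):
        recent = stream[t - window:t]
        _, p_value = ks_2samp(reference, recent)
        if p_value < alpha:          # distributions differ significantly
            drift_points.append(t)
            reference = recent       # re-anchor after a detected drift
    return drift_points

# Synthetic stream whose mean shifts halfway through
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0, 1, 2000), rng.normal(1.5, 1, 2000)])
print(detect_drift(stream))  # positions shortly after the shift at t=2000
```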
1 code implementation • 2 Mar 2024 • André Artelt, Andreas Gregoriades
Counterfactual explanations are among the most popular methods for analyzing the predictions of black-box systems, since they can recommend cost-efficient and actionable changes to the input that turn an undesired system output into a desired one.
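To make the idea concrete, here is a minimal sketch of computing a counterfactual for a scikit-learn classifier via a simple greedy, feature-wise search; the step size and search budget are illustrative, and this is a stand-in for the cost-efficient methods discussed in the paper:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

def counterfactual(model, x, target, step=0.05, max_iter=500):
    """Greedily perturb one feature at a time until the model
    predicts the desired target class (or the budget runs out)."""
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf
        best = None
        for i in range(len(x_cf)):
            for delta in (step, -step):
                cand = x_cf.copy()
                cand[i] += delta
                score = model.predict_proba(cand.reshape(1, -1))[0][target]
                if best is None or score > best[0]:
                    best = (score, cand)
        x_cf = best[1]
    return None  # no counterfactual found within the budget

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
print(counterfactual(model, X[0], target=1))  # nearby input classified as 1
```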
1 code implementation • 13 Feb 2024 • André Artelt, Shubham Sharma, Freddy Lecué, Barbara Hammer
Counterfactual explanations provide a popular method for analyzing the predictions of black-box systems, and they offer the opportunity for computational recourse by suggesting actionable changes to the input that lead to a different (i.e. more favorable) system output.
1 code implementation • 13 Jun 2023 • Ulrike Kuhl, André Artelt, Barbara Hammer
However, potential benefits and drawbacks of the directionality of CFEs for user behavior in xAI remain unclear.
1 code implementation • 25 May 2023 • Paul Stahlhofen, André Artelt, Luca Hermes, Barbara Hammer
Many machine learning models are vulnerable to adversarial attacks: there exist methods that add a small (imperceptible) perturbation to an input such that the model produces a wrong prediction.
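A minimal sketch of such an attack, assuming binary logistic regression so that the loss gradient with respect to the input is available in closed form (this is the fast gradient sign method, shown as a generic example rather than the attack studied in the paper):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def fgsm_attack(model, x, y, eps=0.3):
    """Fast Gradient Sign Method for binary logistic regression:
    move every feature by eps in the direction that increases the loss."""
    w, b = model.coef_[0], model.intercept_[0]
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted P(y=1)
    grad = (p - y) * w                      # d(log-loss)/dx in closed form
    return x + eps * np.sign(grad)

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
model = LogisticRegression().fit(X, y)
x_adv = fgsm_attack(model, X[0], y[0])
print(model.predict([X[0]]), model.predict([x_adv]))  # prediction may flip
```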
no code implementations • 8 Mar 2023 • André Artelt, Andreas Gregoriades
Employee attrition is an important and complex problem that can directly affect an organisation's competitiveness and performance.
1 code implementation • 1 Dec 2022 • Riza Velioglu, Jan Philip Göpfert, André Artelt, Barbara Hammer
On the other hand, Machine Learning (ML) benefits from the vast amount of data available and can deal with high-dimensional sources, yet it has rarely been applied to such processes.
Explainable Artificial Intelligence (XAI)
1 code implementation • 27 Nov 2022 • André Artelt, Barbara Hammer
Counterfactual explanations are a popular type of explanation for making the outcomes of a decision making system transparent to the user.
1 code implementation • 23 Nov 2022 • André Artelt, Kleanthis Malialis, Christos Panayiotou, Marios Polycarpou, Barbara Hammer
Consequently, learning models operating on the data stream might become obsolete and need costly and difficult adjustments such as retraining or adaptation.
1 code implementation • 17 Nov 2022 • Inaam Ashraf, Luca Hermes, André Artelt, Barbara Hammer
We investigate the task of missing value estimation in graphs as given by water distribution systems (WDS) based on sparse signals as a representative machine learning challenge in the domain of critical infrastructure.
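As a rough illustration of the task (not the graph neural network developed in the paper), here is a minimal sketch that iteratively imputes missing node signals as the mean of their neighbors' current estimates, given a known adjacency structure:

```python
import numpy as np

def estimate_missing(adj, values, n_iter=50):
    """Iteratively impute missing node values (NaN) as the mean of
    their neighbors' current estimates on a graph with adjacency adj."""
    est = np.where(np.isnan(values), np.nanmean(values), values)
    missing = np.flatnonzero(np.isnan(values))
    for _ in range(n_iter):
        for v in missing:
            neighbors = np.flatnonzero(adj[v])
            if neighbors.size > 0:
                est[v] = est[neighbors].mean()
    return est

# Toy "pipe network": four nodes in a line, the sensor at node 2 missing
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
pressures = np.array([3.0, 2.5, np.nan, 1.5])
print(estimate_missing(adj, pressures))
```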
2 code implementations • 5 Jul 2022 • André Artelt, Barbara Hammer
In this work, we propose to explain rejects by semifactual explanations, an instance of example-based explanation methods, which themselves have not been widely considered in the XAI community yet.
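A minimal sketch of the semifactual idea ('even if the input changed this much, it would still be rejected'), assuming a simple confidence-based reject option; both the reject mechanism and the one-feature search are illustrative simplifications:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

def is_rejected(model, x, threshold=0.9):
    """Reject inputs whose top-class probability is below the threshold."""
    return model.predict_proba(x.reshape(1, -1)).max() < threshold

def semifactual(model, x, feature, step=0.05, max_steps=200):
    """Push one feature as far as possible while the input is still
    rejected: 'even if this feature were that large, the input
    would still be rejected'."""
    x_sf = x.copy()
    for _ in range(max_steps):
        cand = x_sf.copy()
        cand[feature] += step
        if not is_rejected(model, cand):
            break                # one more step would end the reject
        x_sf = cand
    return x_sf

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
rejected = [x for x in X if is_rejected(model, x)]
if rejected:
    print(semifactual(model, rejected[0], feature=2))
```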
1 code implementation • 15 Jun 2022 • André Artelt, Alexander Schulz, Barbara Hammer
Dimensionality reduction is a popular preprocessing technique and a widely used tool in data mining.
1 code implementation • 18 May 2022 • André Artelt, Stelios Vrachimis, Demetrios Eliades, Marios Polycarpou, Barbara Hammer
Transparency is a major requirement of modern AI-based decision-making systems deployed in the real world.
1 code implementation • 16 May 2022 • André Artelt, Roel Visser, Barbara Hammer
The application of machine learning-based decision-making systems in safety-critical areas requires reliable, high-certainty predictions.
no code implementations • 13 May 2022 • Fabian Hinder, André Artelt, Valerie Vaquet, Barbara Hammer
The notion of concept drift refers to the phenomenon that the data-generating distribution changes over time; as a consequence, machine learning models may become inaccurate and need adjustment.
1 code implementation • 11 May 2022 • Ulrike Kuhl, André Artelt, Barbara Hammer
Following the view of psychological plausibility as comparative similarity, this may be explained by the fact that users in the closest condition experience their CFEs as more psychologically plausible than the computationally plausible counterpart.
1 code implementation • 6 May 2022 • Ulrike Kuhl, André Artelt, Barbara Hammer
Thus, to advance the field of XAI, we introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework.
1 code implementation • 4 Apr 2022 • Jonathan Jakob, André Artelt, Martina Hasenjäger, Barbara Hammer
In this work, we propose an adaptation of the incremental SAM-kNN classifier for regression to build a residual-based anomaly detection system for water distribution networks that is able to adapt to any kind of change.
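The residual-based idea can be sketched as follows, with a plain kNN regressor standing in for the incremental SAM-kNN variant from the paper: predict one sensor from the others and flag test points whose residual is unusually large:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def residual_anomalies(X_train, y_train, X_test, y_test, k=5, z=3.0):
    """Fit a kNN regressor on normal data and flag test points whose
    residual exceeds z standard deviations of the training residuals."""
    model = KNeighborsRegressor(n_neighbors=k).fit(X_train, y_train)
    threshold = z * (y_train - model.predict(X_train)).std()
    residuals = np.abs(y_test - model.predict(X_test))
    return np.flatnonzero(residuals > threshold)

# Toy example: predict one sensor from two correlated ones
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = X @ np.array([1.0, -0.5]) + rng.normal(0, 0.05, 500)
y_test = y[:50].copy()
y_test[10] += 2.0  # injected fault, e.g. a sudden leakage
print(residual_anomalies(X, y, X[:50], y_test))  # flagged indices
```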
1 code implementation • 15 Feb 2022 • André Artelt, Johannes Brinkrolf, Roel Visser, Barbara Hammer
While machine learning models are usually assumed to always output a prediction, there also exist extensions in the form of reject options, which allow the model to reject inputs for which only a prediction with unacceptably low certainty would be possible.
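A minimal sketch of such a reject option, assuming a probabilistic scikit-learn classifier: the model abstains whenever its top-class probability falls below a chosen certainty threshold (the threshold here is an arbitrary illustrative value):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

REJECT = -1  # sentinel label for rejected inputs

def predict_with_reject(model, X, threshold=0.8):
    """Return the model's prediction, or REJECT wherever the
    top-class probability is below the certainty threshold."""
    proba = model.predict_proba(X)
    preds = model.classes_[proba.argmax(axis=1)]
    return np.where(proba.max(axis=1) >= threshold, preds, REJECT)

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)
preds = predict_with_reject(model, X)
print(f"rejected {np.sum(preds == REJECT)} of {len(X)} inputs")
```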
1 code implementation • 17 May 2021 • André Artelt, Barbara Hammer
Transparency is an essential requirement of machine learning-based decision-making systems that are deployed in the real world.
1 code implementation • 6 Apr 2021 • André Artelt, Fabian Hinder, Valerie Vaquet, Robert Feldhans, Barbara Hammer
We also propose a method for automatically finding regions in data space that are affected by a given model adaptation and thus should be explained.
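One simple way to picture this (an illustrative stand-in, not the paper's method) is to compare the predictions of the model before and after adaptation and describe where they disagree, e.g. with a shallow decision tree over the input space:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
old_model = LogisticRegression().fit(X[:500], y[:500])
new_model = LogisticRegression().fit(X[500:], y[500:])  # adapted model

# Label each point by whether the two model versions disagree on it,
# then fit a shallow tree whose rules describe the affected regions.
disagree = old_model.predict(X) != new_model.predict(X)
region_tree = DecisionTreeClassifier(max_depth=2).fit(X, disagree)
print(export_text(region_tree, feature_names=[f"x{i}" for i in range(4)]))
```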
1 code implementation • 3 Mar 2021 • André Artelt, Valerie Vaquet, Riza Velioglu, Fabian Hinder, Johannes Brinkrolf, Malte Schilling, Barbara Hammer
Counterfactual explanations explain a behavior to the user by proposing actions -- as changes to the input -- that would cause a different (specified) behavior of the system.
1 code implementation • 6 Oct 2020 • André Artelt, Barbara Hammer
With the increasing deployment of machine learning systems in practice, transparency and explainability have become serious issues.
1 code implementation • 12 Feb 2020 • André Artelt, Barbara Hammer
The increasing deployment of machine learning, as well as legal regulations such as the EU's GDPR, creates a need for user-friendly explanations of decisions proposed by machine learning models.
1 code implementation • 4 Dec 2019 • Fabian Hinder, André Artelt, Barbara Hammer
The notion of drift refers to the phenomenon that the distribution underlying the observed data changes over time.
1 code implementation • 15 Nov 2019 • André Artelt, Barbara Hammer
Due to the increasing use of machine learning in practice, it becomes more and more important to be able to explain the predictions and behavior of machine learning models.
1 code implementation • 2 Aug 2019 • André Artelt, Barbara Hammer
The increasing use of machine learning in practice, together with legal regulations like the EU's GDPR, makes it necessary to be able to explain the predictions and behavior of machine learning models.
1 code implementation • 25 Feb 2019 • Jan Philip Göpfert, André Artelt, Heiko Wersing, Barbara Hammer
Convolutional neural networks have achieved a string of successes in recent years, but their lack of interpretability remains a serious issue.