Search Results for author: André Artelt

Found 28 papers, 26 papers with code

Towards non-parametric drift detection via Dynamic Adapting Window Independence Drift Detection (DAWIDD)

1 code implementation • ICML 2020 • Fabian Hinder, André Artelt, Barbara Hammer

The notion of concept drift refers to the phenomenon that the distribution underlying the observed data changes over time; as a consequence, machine learning models may become inaccurate and need adjustment.
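
DAWIDD's central observation is that drift is equivalent to a statistical dependence between time and the observed data, so drift detection can be cast as an independence test. The sketch below illustrates this idea with a generic permutation test on a one-dimensional stream; the function and the test are simplified stand-ins, not the paper's dynamic-window procedure.

```python
import numpy as np

def time_dependence_drift_test(X, n_perm=1000, alpha=0.05, seed=0):
    """Flag drift if the observations look statistically dependent on time.

    Generic permutation test on the correlation between the time index
    and a 1-D stream; a simplified stand-in, not the paper's dynamic
    adapting window procedure.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(len(X))
    stat = abs(np.corrcoef(t, X)[0, 1])  # observed time-data dependence
    perm = np.array([abs(np.corrcoef(t, rng.permutation(X))[0, 1])
                     for _ in range(n_perm)])  # dependence under H0
    p_value = (perm >= stat).mean()
    return p_value < alpha  # True -> drift detected
```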

A Two-Stage Algorithm for Cost-Efficient Multi-instance Counterfactual Explanations

1 code implementation • 2 Mar 2024 • André Artelt, Andreas Gregoriades

Counterfactual explanations are among the most popular methods for analyzing the predictions of black-box systems, since they can recommend cost-efficient and actionable changes to the input that turn an undesired system output into a desired one.

counterfactual
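
A minimal single-instance sketch of the underlying idea: search for a nearby input that flips the classifier's prediction to the desired class. This is an illustration only, not the paper's two-stage multi-instance algorithm; it assumes an sklearn-style classifier with `predict`/`predict_proba`.

```python
import numpy as np

def greedy_counterfactual(model, x, target, step=0.05, max_iter=200):
    """Greedy coordinate search for a close input whose prediction flips
    to `target`. Illustrative sketch, not the paper's algorithm."""
    x_cf = x.astype(float).copy()
    for _ in range(max_iter):
        if model.predict(x_cf[None])[0] == target:
            return x_cf  # counterfactual found
        # take the single-coordinate step that most increases the
        # target-class probability
        best_cand, best_p = None, -np.inf
        for i in range(len(x_cf)):
            for delta in (-step, step):
                cand = x_cf.copy()
                cand[i] += delta
                p = model.predict_proba(cand[None])[0, target]
                if p > best_p:
                    best_cand, best_p = cand, p
        x_cf = best_cand
    return None  # no counterfactual found within the budget
```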

The Effect of Data Poisoning on Counterfactual Explanations

1 code implementation • 13 Feb 2024 • André Artelt, Shubham Sharma, Freddy Lecué, Barbara Hammer

Counterfactual explanations provide a popular method for analyzing the predictions of black-box systems, and they can offer computational recourse by suggesting actionable changes to the input that yield a different (i.e. more favorable) system output.

counterfactual, Data Poisoning

Adversarial Attacks on Leakage Detectors in Water Distribution Networks

1 code implementation • 25 May 2023 • Paul Stahlhofen, André Artelt, Luca Hermes, Barbara Hammer

Many machine learning models are vulnerable to adversarial attacks: there exist methodologies that add a small (imperceptible) perturbation to an input such that the model produces a wrong prediction.
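
One classic methodology of this kind is the fast gradient sign method (FGSM), sketched below for a generic differentiable classifier; this is a textbook illustration, not the paper's attack on water-network leakage detectors.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, eps=0.03):
    """Fast Gradient Sign Method: add a small perturbation in the
    direction that increases the loss so the model mispredicts.
    Generic example, not the WDN-specific attack from the paper."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()
```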

"How to make them stay?" -- Diverse Counterfactual Explanations of Employee Attrition

no code implementations • 8 Mar 2023 • André Artelt, Andreas Gregoriades

Employee attrition is an important and complex problem that can directly affect an organisation's competitiveness and performance.

counterfactual, Counterfactual Explanation, +1

Explainable Artificial Intelligence for Improved Modeling of Processes

1 code implementation • 1 Dec 2022 • Riza Velioglu, Jan Philip Göpfert, André Artelt, Barbara Hammer

On the other hand, Machine Learning (ML) benefits from the vast amount of data available and can deal with high-dimensional sources, yet it has rarely been applied to processes.

Explainable Artificial Intelligence (XAI)

"Explain it in the Same Way!" -- Model-Agnostic Group Fairness of Counterfactual Explanations

1 code implementation • 27 Nov 2022 • André Artelt, Barbara Hammer

Counterfactual explanations are a popular type of explanation for making the outcomes of a decision making system transparent to the user.

counterfactual, Decision Making, +1

Unsupervised Unlearning of Concept Drift with Autoencoders

1 code implementation • 23 Nov 2022 • André Artelt, Kleanthis Malialis, Christos Panayiotou, Marios Polycarpou, Barbara Hammer

Consequently, learning models operating on the data stream might become obsolete and need costly and difficult adjustments such as retraining or adaptation.

Incremental Learning

Spatial Graph Convolution Neural Networks for Water Distribution Systems

1 code implementation • 17 Nov 2022 • Inaam Ashraf, Luca Hermes, André Artelt, Barbara Hammer

We investigate the task of missing value estimation in graphs as given by water distribution systems (WDS) based on sparse signals as a representative machine learning challenge in the domain of critical infrastructure.
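
The basic building block is a spatial graph convolution that propagates the sparse node signals along the pipe network's adjacency structure. A minimal sketch follows; the normalization and activation choices are generic assumptions, not the paper's architecture.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One spatial graph-convolution step: average each node's
    neighborhood signal (with self-loops) and apply a learned linear
    map followed by ReLU. Minimal sketch, not the paper's model."""
    A_hat = A + np.eye(len(A))                # adjacency with self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))  # degree normalization
    return np.maximum(D_inv @ A_hat @ H @ W, 0.0)
```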

"Even if ..." -- Diverse Semifactual Explanations of Reject

2 code implementations • 5 Jul 2022 • André Artelt, Barbara Hammer

In this work, we propose to explain rejects by semifactual explanations, an instance of example-based explanation methods, which themselves have not yet been widely considered in the XAI community.

BIG-bench Machine Learning, Conformal Prediction, +2
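
A semifactual keeps the model's outcome fixed while moving the input as far from the original as possible ("even if the input were this different, the decision would stay the same"). Below is a random hill-climbing sketch under that definition, assuming an sklearn-style classifier; the paper's diversity mechanism is omitted.

```python
import numpy as np

def semifactual(model, x, step=0.05, max_iter=200, seed=0):
    """Hill-climb away from x while the prediction stays unchanged.
    Illustrative sketch of a semifactual, without the paper's
    diversity constraints."""
    rng = np.random.default_rng(seed)
    y = model.predict(x[None])[0]
    x_sf = x.astype(float).copy()
    for _ in range(max_iter):
        cand = x_sf + step * rng.standard_normal(len(x))
        # accept only moves that increase the distance from x
        # without changing the prediction
        if (model.predict(cand[None])[0] == y
                and np.linalg.norm(cand - x) > np.linalg.norm(x_sf - x)):
            x_sf = cand
    return x_sf
```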

Model Agnostic Local Explanations of Reject

1 code implementation • 16 May 2022 • André Artelt, Roel Visser, Barbara Hammer

The application of machine learning based decision making systems in safety-critical areas requires reliable, high-certainty predictions.

counterfactual, Decision Making

Precise Change Point Detection using Spectral Drift Detection

no code implementations • 13 May 2022 • Fabian Hinder, André Artelt, Valerie Vaquet, Barbara Hammer

The notion of concept drift refers to the phenomenon that the data-generating distribution changes over time; as a consequence, machine learning models may become inaccurate and need adjustment.

Change Point Detection

Keep Your Friends Close and Your Counterfactuals Closer: Improved Learning From Closest Rather Than Plausible Counterfactual Explanations in an Abstract Setting

1 code implementation • 11 May 2022 • Ulrike Kuhl, André Artelt, Barbara Hammer

Following the view of psychological plausibility as comparative similarity, this may be explained by the fact that users in the closest condition experience their CFEs as more psychologically plausible than the computationally plausible counterpart.

counterfactual, Experimental Design, +2

SAM-kNN Regressor for Online Learning in Water Distribution Networks

1 code implementation • 4 Apr 2022 • Jonathan Jakob, André Artelt, Martina Hasenjäger, Barbara Hammer

In this work, we propose an adaptation of the incremental SAM-kNN classifier for regression to build a residual-based anomaly detection system for water distribution networks that is able to adapt to any kind of change.

Anomaly Detection
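
The residual-based scheme reduces to a few lines: compare measurements against the regressor's predictions and flag unusually large errors as anomalies. A simplified sketch; the paper's online SAM-kNN regressor with its adaptive memories is not reproduced here.

```python
import numpy as np

def residual_anomaly_flags(y_true, y_pred, k=3.0):
    """Flag time steps whose absolute prediction error lies far outside
    the typical error range. Simplified sketch of residual-based
    anomaly detection."""
    residuals = np.abs(y_true - y_pred)
    mu, sigma = residuals.mean(), residuals.std()
    return residuals > mu + k * sigma
```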

Explaining Reject Options of Learning Vector Quantization Classifiers

1 code implementation • 15 Feb 2022 • André Artelt, Johannes Brinkrolf, Roel Visser, Barbara Hammer

While machine learning models are usually assumed to always output a prediction, there also exist extensions in the form of reject options, which allow the model to reject inputs for which only a prediction with unacceptably low certainty would be possible.

counterfactual, Quantization
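
In its simplest certainty-threshold form, a reject option looks like the sketch below; note that LVQ models derive certainty from distances to prototypes, which this generic probability-based illustration does not model.

```python
import numpy as np

def predict_with_reject(proba, threshold=0.8):
    """Certainty-based reject option: return the predicted class only if
    the confidence clears the threshold, otherwise -1 (= reject).
    Generic illustration, not the LVQ-specific certainty measure."""
    conf = proba.max(axis=1)
    labels = proba.argmax(axis=1)
    return np.where(conf >= threshold, labels, -1)
```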

Convex optimization for actionable & plausible counterfactual explanations

1 code implementation • 17 May 2021 • André Artelt, Barbara Hammer

Transparency is an essential requirement of machine learning based decision making systems that are deployed in the real world.

counterfactual, Decision Making
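
For a linear classifier, the search for the closest counterfactual is itself a convex program. The cvxpy sketch below shows that core formulation; the actionability and plausibility constraints that are the paper's contribution are omitted.

```python
import cvxpy as cp
import numpy as np

def linear_counterfactual(w, b, x, margin=1e-3):
    """Closest (L1) counterfactual of a linear classifier sign(w.x + b),
    posed as a convex program. Core idea only; the paper adds
    actionability and plausibility constraints on top."""
    x_cf = cp.Variable(len(x))
    side = np.sign(w @ x + b)  # current side of the decision boundary
    constraints = [side * (w @ x_cf + b) <= -margin]  # flip the decision
    prob = cp.Problem(cp.Minimize(cp.norm1(x_cf - x)), constraints)
    prob.solve()
    return x_cf.value
```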

Contrastive Explanations for Explaining Model Adaptations

1 code implementation • 6 Apr 2021 • André Artelt, Fabian Hinder, Valerie Vaquet, Robert Feldhans, Barbara Hammer

We also propose a method for automatically finding regions in data space that are affected by a given model adaptation and thus should be explained.

Decision Making
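
Locating where the old and the adapted model disagree is the natural starting point for finding such affected regions; the deliberately simple sketch below only collects the disagreement samples (the paper goes further and characterizes the regions themselves).

```python
import numpy as np

def disagreement_samples(model_old, model_new, X):
    """Return the samples whose prediction changed after a model
    adaptation -- raw material for locating affected regions.
    Simplified sketch, not the paper's region-finding method."""
    changed = model_old.predict(X) != model_new.predict(X)
    return X[changed]
```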

Evaluating Robustness of Counterfactual Explanations

1 code implementation • 3 Mar 2021 • André Artelt, Valerie Vaquet, Riza Velioglu, Fabian Hinder, Johannes Brinkrolf, Malte Schilling, Barbara Hammer

Counterfactual explanations explain a behavior to the user by proposing actions -- as changes to the input -- that would cause a different (specified) behavior of the system.

counterfactual, Decision Making, +1

Efficient computation of contrastive explanations

1 code implementation • 6 Oct 2020 • André Artelt, Barbara Hammer

With the increasing deployment of machine learning systems in practice, transparency and explainability have become serious issues.

BIG-bench Machine Learning, counterfactual

Convex Density Constraints for Computing Plausible Counterfactual Explanations

1 code implementation • 12 Feb 2020 • André Artelt, Barbara Hammer

The increasing deployment of machine learning, as well as legal regulations such as the EU's GDPR, creates a need for user-friendly explanations of decisions proposed by machine learning models.

BIG-bench Machine Learning, counterfactual

A probability theoretic approach to drifting data in continuous time domains

1 code implementation • 4 Dec 2019 • Fabian Hinder, André Artelt, Barbara Hammer

The notion of drift refers to the phenomenon that the distribution underlying the observed data changes over time.

Change Point Detection
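
Formally, this line of work models the data observed at time t as drawn from a distribution P_t; drift then means that this family of distributions is not constant over the relevant time domain. A standard formalisation consistent with the abstract:

```latex
% Drift on a time domain \mathcal{T}: the data-generating distribution
% P_t is not the same at all times.
\exists\, s, t \in \mathcal{T} : \quad P_s \neq P_t
```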

On the computation of counterfactual explanations -- A survey

1 code implementation • 15 Nov 2019 • André Artelt, Barbara Hammer

Due to the increasing use of machine learning in practice, it becomes more and more important to be able to explain the predictions and behavior of machine learning models.

BIG-bench Machine Learning, counterfactual

Efficient computation of counterfactual explanations of LVQ models

1 code implementation • 2 Aug 2019 • André Artelt, Barbara Hammer

The increasing use of machine learning in practice, together with legal regulations like the EU's GDPR, makes it necessary to be able to explain the predictions and behavior of machine learning models.

BIG-bench Machine Learning, counterfactual, +2

Adversarial attacks hidden in plain sight

1 code implementation • 25 Feb 2019 • Jan Philip Göpfert, André Artelt, Heiko Wersing, Barbara Hammer

Convolutional neural networks have achieved a string of successes in recent years, but their lack of interpretability remains a serious issue.

General Classification
